In loving memory of my father,
Shlomo Harari
Years Before the Present

13.5 billion | Matter and energy appear. Beginning of physics. Atoms and molecules appear. Beginning of chemistry.
4.5 billion | Formation of planet Earth.
3.8 billion | Emergence of organisms. Beginning of biology.
6 million | Last common grandmother of humans and chimpanzees.
2.5 million | Evolution of the genus Homo in Africa. First stone tools.
2 million | Humans spread from Africa to Eurasia. Evolution of different human species.
500,000 | Neanderthals evolve in Europe and the Middle East.
300,000 | Daily usage of fire.
200,000 | Homo sapiens evolves in East Africa.
70,000 | The Cognitive Revolution. Emergence of fictive language. Beginning of history. Sapiens spread out of Africa.
45,000 | Sapiens settle Australia. Extinction of Australian megafauna.
30,000 | Extinction of Neanderthals.
16,000 | Sapiens settle America. Extinction of American megafauna.
13,000 | Extinction of Homo floresiensis. Homo sapiens the only surviving human species.
12,000 | The Agricultural Revolution. Domestication of plants and animals. Permanent settlements.
5,000 | First kingdoms, script and money. Polytheistic religions.
4,250 | First empire – the Akkadian Empire of Sargon.
2,500 | Invention of coinage – a universal money.
2,000 | Han Empire in China. Roman Empire in the Mediterranean. Christianity.
1,400 | Islam.
500 | The Scientific Revolution. Humankind admits its ignorance and begins to acquire unprecedented power. Europeans begin to conquer America and the oceans. The entire planet becomes a single historical arena. The rise of capitalism.
200 | The Industrial Revolution. Family and community are replaced by state and market. Massive extinction of plants and animals.
The Present | Humans transcend the boundaries of planet Earth. Nuclear weapons threaten the survival of humankind. Organisms are increasingly shaped by intelligent design rather than natural selection.
The Future | Intelligent design becomes the basic principle of life? Homo sapiens is replaced by superhumans?
1. A human handprint made about 30,000 years ago, on the wall of the Chauvet-Pont-d’Arc Cave in southern France. Somebody tried to say, ‘I was here!’
{© ImageBank/Getty Images Israel.}
ABOUT 13.5 BILLION YEARS AGO, MATTER, energy, time and space came into being in what is known as the Big Bang. The story of these fundamental features of our universe is called physics.
About 300,000 years after their appearance, matter and energy started to coalesce into complex structures, called atoms, which then combined into molecules. The story of atoms, molecules and their interactions is called chemistry.
About 3.8 billion years ago, on a planet called Earth, certain molecules combined to form particularly large and intricate structures called organisms. The story of organisms is called biology.
About 70,000 years ago, organisms belonging to the species Homo sapiens started to form even more elaborate structures called cultures. The subsequent development of these human cultures is called history.
Three important revolutions shaped the course of history: the Cognitive Revolution kick-started history about 70,000 years ago. The Agricultural Revolution sped it up about 12,000 years ago. The Scientific Revolution, which got under way only 500 years ago, may well end history and start something completely different. This book tells the story of how these three revolutions have affected humans and their fellow organisms.
There were humans long before there was history. Animals much like modern humans first appeared about 2.5 million years ago. But for countless generations they did not stand out from the myriad other organisms with which they shared their habitats.
On a hike in East Africa 2 million years ago, you might well have encountered a familiar cast of human characters: anxious mothers cuddling their babies and clutches of carefree children playing in the mud; temperamental youths chafing against the dictates of society and weary elders who just wanted to be left in peace; chest-thumping machos trying to impress the local beauty and wise old matriarchs who had already seen it all. These archaic humans loved, played, formed close friendships and competed for status and power – but so did chimpanzees, baboons and elephants. There was nothing special about humans. Nobody, least of all humans themselves, had any inkling that their descendants would one day walk on the moon, split the atom, fathom the genetic code and write history books. The most important thing to know about prehistoric humans is that they were insignificant animals with no more impact on their environment than gorillas, fireflies or jellyfish.
Biologists classify organisms into species. Animals are said to belong to the same species if they tend to mate with each other, giving birth to fertile offspring. Horses and donkeys have a recent common ancestor and share many physical traits. But they show little sexual interest in one another. They will mate if induced to do so – but their offspring, called mules, are sterile. Mutations in donkey DNA can therefore never cross over to horses, or vice versa. The two types of animals are consequently considered two distinct species, moving along separate evolutionary paths. By contrast, a bulldog and a spaniel may look very different, but they are members of the same species, sharing the same DNA pool. They will happily mate and their puppies will grow up to pair off with other dogs and produce more puppies.
Species that evolved from a common ancestor are bunched together under the heading ‘genus’ (plural genera). Lions, tigers, leopards and jaguars are different species within the genus Panthera. Biologists label organisms with a two-part Latin name, genus followed by species. Lions, for example, are called Panthera leo, the species leo of the genus Panthera. Presumably, everyone reading this book is a Homo sapiens – the species sapiens (wise) of the genus Homo (man).
Genera in their turn are grouped into families, such as the cats (lions, cheetahs, house cats), the dogs (wolves, foxes, jackals) and the elephants (elephants, mammoths, mastodons). All members of a family trace their lineage back to a founding matriarch or patriarch. All cats, for example, from the smallest house kitten to the most ferocious lion, share a common feline ancestor who lived about 25 million years ago.
Homo sapiens, too, belongs to a family. This banal fact used to be one of history’s most closely guarded secrets. Homo sapiens long preferred to view itself as set apart from animals, an orphan bereft of family, lacking siblings or cousins, and most importantly, without parents. But that’s just not the case. Like it or not, we are members of a large and particularly noisy family called the great apes. Our closest living relatives include chimpanzees, gorillas and orang-utans. The chimpanzees are the closest. Just 6 million years ago, a single female ape had two daughters. One became the ancestor of all chimpanzees, the other is our own grandmother.
Homo sapiens has kept hidden an even more disturbing secret. Not only do we possess an abundance of uncivilised cousins, once upon a time we had quite a few brothers and sisters as well. We are used to thinking about ourselves as the only humans, because for the last 10,000 years, our species has indeed been the only human species around. Yet the real meaning of the word human is ‘an animal belonging to the genus Homo’, and there used to be many other species of this genus besides Homo sapiens. Moreover, as we shall see in the last chapter of the book, in the not so distant future we might again have to contend with non-sapiens humans. To clarify this point, I will often use the term ‘Sapiens’ to denote members of the species Homo sapiens, while reserving the term ‘human’ to refer to all extant members of the genus Homo.
Humans first evolved in East Africa about 2.5 million years ago from an earlier genus of apes called Australopithecus, which means ‘Southern Ape’. About 2 million years ago, some of these archaic men and women left their homeland to journey through and settle vast areas of North Africa, Europe and Asia. Since survival in the snowy forests of northern Europe required different traits than those needed to stay alive in Indonesia’s steaming jungles, human populations evolved in different directions. The result was several distinct species, to each of which scientists have assigned a pompous Latin name.
2. Our siblings, according to speculative reconstructions: Homo rudolfensis (East Africa); Homo erectus (East Asia); and Homo neanderthalensis (Europe and western Asia). All are humans.
{© Visual/Corbis.}
Humans in Europe and western Asia evolved into Homo neanderthalensis (‘Man from the Neander Valley’), popularly referred to simply as ‘Neanderthals’. Neanderthals, bulkier and more muscular than us Sapiens, were well adapted to the cold climate of Ice Age western Eurasia. The more eastern regions of Asia were populated by Homo erectus, ‘Upright Man’, who survived there for close to 2 million years, making it the most durable human species ever. This record is unlikely to be broken even by our own species. It is doubtful whether Homo sapiens will still be around a thousand years from now, so 2 million years is really out of our league.
On the island of Java, in Indonesia, lived Homo soloensis, ‘Man from the Solo Valley’, who was suited to life in the tropics. On another Indonesian island – the small island of Flores – archaic humans underwent a process of dwarfing. Humans first reached Flores when the sea level was exceptionally low, and the island was easily accessible from the mainland. When the seas rose again, some people were trapped on the island, which was poor in resources. Big people, who need a lot of food, died first. Smaller fellows survived much better. Over the generations, the people of Flores became dwarves. This unique species, known by scientists as Homo floresiensis, reached a maximum height of only 3.5 feet and weighed no more than fifty-five pounds. They were nevertheless able to produce stone tools, and even managed occasionally to hunt down some of the island’s elephants – though, to be fair, the elephants were a dwarf species as well.
In 2010 another lost sibling was rescued from oblivion, when scientists excavating the Denisova Cave in Siberia discovered a fossilised finger bone. Genetic analysis proved that the finger belonged to a previously unknown human species, which was named Homo denisova. Who knows how many lost relatives of ours are waiting to be discovered in other caves, on other islands, and in other climes.
While these humans were evolving in Europe and Asia, evolution in East Africa did not stop. The cradle of humanity continued to nurture numerous new species, such as Homo rudolfensis, ‘Man from Lake Rudolf’, Homo ergaster, ‘Working Man’, and eventually our own species, which we’ve immodestly named Homo sapiens, ‘Wise Man’.
The members of some of these species were massive and others were dwarves. Some were fearsome hunters and others meek plant-gatherers. Some lived only on a single island, while many roamed over continents. But all of them belonged to the genus Homo. They were all human beings.
It’s a common fallacy to envision these species as arranged in a straight line of descent, with Ergaster begetting Erectus, Erectus begetting the Neanderthals, and the Neanderthals evolving into us. This linear model gives the mistaken impression that at any particular moment only one type of human inhabited the earth, and that all earlier species were merely older models of ourselves. The truth is that from about 2 million years ago until around 10,000 years ago, the world was home, at one and the same time, to several human species. And why not? Today there are many species of foxes, bears and pigs. The earth of a hundred millennia ago was walked by at least six different species of man. It’s our current exclusivity, not that multi-species past, that is peculiar – and perhaps incriminating. As we will shortly see, we Sapiens have good reasons to repress the memory of our siblings.
Despite their many differences, all human species share several defining characteristics. Most notably, humans have extraordinarily large brains compared to other animals. Mammals weighing 130 pounds have an average brain size of 12 cubic inches. The earliest men and women, 2.5 million years ago, had brains of about 36 cubic inches. Modern Sapiens sport a brain averaging 73–85 cubic inches. Neanderthal brains were even bigger.
That evolution should select for larger brains may seem to us like, well, a no-brainer. We are so enamoured of our high intelligence that we assume that when it comes to cerebral power, more must be better. But if that were the case, the feline family would also have produced cats who could do calculus, and frogs would by now have launched their own space program. Why are giant brains so rare in the animal kingdom?
The fact is that a jumbo brain is a jumbo drain on the body. It’s not easy to carry around, especially when encased inside a massive skull. It’s even harder to fuel. In Homo sapiens, the brain accounts for about 2–3 per cent of total body weight, but it consumes 25 per cent of the body’s energy when the body is at rest. By comparison, the brains of other apes require only 8 per cent of rest-time energy. Archaic humans paid for their large brains in two ways. Firstly, they spent more time in search of food. Secondly, their muscles atrophied. Like a government diverting money from defence to education, humans diverted energy from biceps to neurons. It’s hardly a foregone conclusion that this is a good strategy for survival on the savannah. A chimpanzee can’t win an argument with a Homo sapiens, but the ape can rip the man apart like a rag doll.
Today our big brains pay off nicely, because we can produce cars and guns that enable us to move much faster than chimps, and shoot them from a safe distance instead of wrestling. But cars and guns are a recent phenomenon. For more than 2 million years, human neural networks kept growing and growing, but apart from some flint knives and pointed sticks, humans had precious little to show for it. What then drove forward the evolution of the massive human brain during those 2 million years? Frankly, we don’t know.
Another singular human trait is that we walk upright on two legs. Standing up, it’s easier to scan the savannah for game or enemies, and arms that are unnecessary for locomotion are freed for other purposes, like throwing stones or signalling. The more things these hands could do, the more successful their owners were, so evolutionary pressure brought about an increasing concentration of nerves and finely tuned muscles in the palms and fingers. As a result, humans can perform very intricate tasks with their hands. In particular, they can produce and use sophisticated tools. The first evidence for tool production dates from about 2.5 million years ago, and the manufacture and use of tools are the criteria by which archaeologists recognise ancient humans.
Yet walking upright has its downside. The skeleton of our primate ancestors developed for millions of years to support a creature that walked on all fours and had a relatively small head. Adjusting to an upright position was quite a challenge, especially when the scaffolding had to support an extra-large cranium. Humankind paid for its lofty vision and industrious hands with backaches and stiff necks.
Women paid extra. An upright gait required narrower hips, constricting the birth canal – and this just when babies’ heads were getting bigger and bigger. Death in childbirth became a major hazard for human females. Women who gave birth earlier, when the infant’s brain and head were still relatively small and supple, fared better and lived to have more children. Natural selection consequently favoured earlier births. And, indeed, compared to other animals, humans are born prematurely, when many of their vital systems are still under-developed. A colt can trot shortly after birth; a kitten leaves its mother to forage on its own when it is just a few weeks old. Human babies are helpless, dependent for many years on their elders for sustenance, protection and education.
This fact has contributed greatly both to humankind’s extraordinary social abilities and to its unique social problems. Lone mothers could hardly forage enough food for their offspring and themselves with needy children in tow. Raising children required constant help from other family members and neighbours. It takes a tribe to raise a human. Evolution thus favoured those capable of forming strong social ties. In addition, since humans are born underdeveloped, they can be educated and socialised to a far greater extent than any other animal. Most mammals emerge from the womb like glazed earthenware emerging from a kiln – any attempt at remoulding will only scratch or break them. Humans emerge from the womb like molten glass from a furnace. They can be spun, stretched and shaped with a surprising degree of freedom. This is why today we can educate our children to become Christian or Buddhist, capitalist or socialist, warlike or peace-loving.
We assume that a large brain, the use of tools, superior learning abilities and complex social structures are huge advantages. It seems self-evident that these have made humankind the most powerful animal on earth. But humans enjoyed all of these advantages for a full 2 million years during which they remained weak and marginal creatures. Thus humans who lived a million years ago, despite their big brains and sharp stone tools, dwelt in constant fear of predators, rarely hunted large game, and subsisted mainly by gathering plants, scooping up insects, stalking small animals, and eating the carrion left behind by other more powerful carnivores.
One of the most common uses of early stone tools was to crack open bones in order to get to the marrow. Some researchers believe this was our original niche. Just as woodpeckers specialise in extracting insects from the trunks of trees, the first humans specialised in extracting marrow from bones. Why marrow? Well, suppose you observe a pride of lions take down and devour a giraffe. You wait patiently until they’re done. But it’s still not your turn, because first the hyenas and jackals – and you don’t dare interfere with them – scavenge the leftovers. Only then would you and your band dare approach the carcass, look cautiously left and right – and dig into the only edible tissue that remained: the marrow inside the bones.
This is a key to understanding our history and psychology. Genus Homo’s position in the food chain was, until quite recently, solidly in the middle. For millions of years, humans hunted smaller creatures and gathered what they could, all the while being hunted by larger predators. It was only 400,000 years ago that several species of man began to hunt large game on a regular basis, and only in the last 100,000 years – with the rise of Homo sapiens – that man jumped to the top of the food chain.
That spectacular leap from the middle to the top had enormous consequences. Other animals at the top of the pyramid, such as lions and sharks, evolved into that position very gradually, over millions of years. This enabled the ecosystem to develop checks and balances that prevent lions and sharks from wreaking too much havoc. As lions became deadlier, so gazelles evolved to run faster, hyenas to cooperate better, and rhinoceroses to be more bad-tempered. In contrast, humankind ascended to the top so quickly that the ecosystem was not given time to adjust. Moreover, humans themselves failed to adjust. Most top predators of the planet are majestic creatures. Millions of years of dominion have filled them with self-confidence. Sapiens by contrast is more like a banana republic dictator. Having so recently been one of the underdogs of the savannah, we are full of fears and anxieties over our position, which makes us doubly cruel and dangerous. Many historical calamities, from deadly wars to ecological catastrophes, have resulted from this over-hasty jump.
A significant step on the way to the top was the domestication of fire. Some human species may have made occasional use of fire as early as 800,000 years ago. By about 300,000 years ago, Homo erectus, Neanderthals and the forefathers of Homo sapiens were using fire on a daily basis. Humans now had a dependable source of light and warmth, and a deadly weapon against prowling lions. Not long afterwards, humans may even have started deliberately to torch their neighbourhoods. A carefully managed fire could turn impassable barren thickets into prime grasslands teeming with game. In addition, once the fire died down, Stone Age entrepreneurs could walk through the smoking remains and harvest charcoaled animals, nuts and tubers.
But the best thing fire did was cook. Foods that humans cannot digest in their natural forms – such as wheat, rice and potatoes – became staples of our diet thanks to cooking. Fire not only changed food’s chemistry, it changed its biology as well. Cooking killed germs and parasites that infested food. Humans also had a far easier time chewing and digesting old favourites such as fruits, nuts, insects and carrion if they were cooked. Whereas chimpanzees spend five hours a day chewing raw food, a single hour suffices for people eating cooked food.
The advent of cooking enabled humans to eat more kinds of food, to devote less time to eating, and to make do with smaller teeth and shorter intestines. Some scholars believe there is a direct link between the advent of cooking, the shortening of the human intestinal tract, and the growth of the human brain. Since long intestines and large brains are both massive energy consumers, it’s hard to have both. By shortening the intestines and decreasing their energy consumption, cooking inadvertently opened the way to the jumbo brains of Neanderthals and Sapiens.1
Fire also opened the first significant gulf between man and the other animals. The power of almost all animals depends on their bodies: the strength of their muscles, the size of their teeth, the breadth of their wings. Though they may harness winds and currents, they are unable to control these natural forces, and are always constrained by their physical design. Eagles, for example, identify thermal columns rising from the ground, spread their giant wings and allow the hot air to lift them upwards. Yet eagles cannot control the location of the columns, and their maximum carrying capacity is strictly proportional to their wingspan.
When humans domesticated fire, they gained control of an obedient and potentially limitless force. Unlike eagles, humans could choose when and where to ignite a flame, and they were able to exploit fire for any number of tasks. Most importantly, the power of fire was not limited by the form, structure or strength of the human body. A single woman with a flint or fire stick could burn down an entire forest in a matter of hours. The domestication of fire was a sign of things to come.
Despite the benefits of fire, 150,000 years ago humans were still marginal creatures. They could now scare away lions, warm themselves during cold nights, and burn down the occasional forest. Yet counting all species together, there were still no more than perhaps a million humans living between the Indonesian archipelago and the Iberian peninsula, a mere blip on the ecological radar.
Our own species, Homo sapiens, was already present on the world stage, but so far it was just minding its own business in a corner of Africa. We don’t know exactly where and when animals that can be classified as Homo sapiens first evolved from some earlier type of humans, but most scientists agree that by 150,000 years ago, East Africa was populated by Sapiens that looked just like us. If one of them turned up in a modern morgue, the local pathologist would notice nothing peculiar. Thanks to the blessings of fire, they had smaller teeth and jaws than their ancestors, whereas they had massive brains, equal in size to ours.
Scientists also agree that about 70,000 years ago, Sapiens from East Africa spread into the Arabian peninsula, and from there they quickly overran the entire Eurasian landmass.
When Homo sapiens landed in Arabia, most of Eurasia was already settled by other humans. What happened to them? There are two conflicting theories. The ‘Interbreeding Theory’ tells a story of attraction, sex and mingling. As the African immigrants spread around the world, they bred with other human populations, and people today are the outcome of this interbreeding.
For example, when Sapiens reached the Middle East and Europe, they encountered the Neanderthals. These humans were more muscular than Sapiens, had larger brains, and were better adapted to cold climes. They used tools and fire, were good hunters, and apparently took care of their sick and infirm. (Archaeologists have discovered the bones of Neanderthals who lived for many years with severe physical handicaps, evidence that they were cared for by their relatives.) Neanderthals are often depicted in caricatures as the archetypical brutish and stupid ‘cave people’, but recent evidence has changed their image.
According to the Interbreeding Theory, when Sapiens spread into Neanderthal lands, Sapiens bred with Neanderthals until the two populations merged. If this is the case, then today’s Eurasians are not pure Sapiens. They are a mixture of Sapiens and Neanderthals. Similarly, when Sapiens reached East Asia, they interbred with the local Erectus, so the Chinese and Koreans are a mixture of Sapiens and Erectus.
The opposing view, called the ‘Replacement Theory’, tells a very different story – one of incompatibility, revulsion, and perhaps even genocide. According to this theory, Sapiens and other humans had different anatomies, and most likely different mating habits and even body odours. They would have had little sexual interest in one another. And even if a Neanderthal Romeo and a Sapiens Juliet fell in love, they could not produce fertile children, because the genetic gulf separating the two populations was already unbridgeable. The two populations remained completely distinct, and when the Neanderthals died out, or were killed off, their genes died with them. According to this view, Sapiens replaced all the previous human populations without merging with them. If that is the case, the lineages of all contemporary humans can be traced back, exclusively, to East Africa, 70,000 years ago. We are all ‘pure Sapiens’.
Map 1. Homo sapiens conquers the globe.
{Maps by Neil Gower}
A lot hinges on this debate. From an evolutionary perspective, 70,000 years is a relatively short interval. If the Replacement Theory is correct, all living humans have roughly the same genetic baggage, and racial distinctions among them are negligible. But if the Interbreeding Theory is right, there might well be genetic differences between Africans, Europeans and Asians that go back hundreds of thousands of years. This is political dynamite, which could provide material for explosive racial theories.
In recent decades the Replacement Theory has been the common wisdom in the field. It had firmer archaeological backing, and was more politically correct (scientists had no desire to open up the Pandora’s box of racism by claiming significant genetic diversity among modern human populations). But that ended in 2010, when the results of a four-year effort to map the Neanderthal genome were published. Geneticists were able to collect enough intact Neanderthal DNA from fossils to make a broad comparison between it and the DNA of contemporary humans. The results stunned the scientific community.
It turned out that 1–4 per cent of the unique human DNA of modern populations in the Middle East and Europe is Neanderthal DNA. That’s not a huge amount, but it’s significant. A second shock came several months later, when DNA extracted from the fossilised finger from Denisova was mapped. The results proved that up to 6 per cent of the unique human DNA of modern Melanesians and Aboriginal Australians is Denisovan DNA.
If these results are valid – and it’s important to keep in mind that further research is under way and may either reinforce or modify these conclusions – the Interbreeders got at least some things right. But that doesn’t mean that the Replacement Theory is completely wrong. Since Neanderthals and Denisovans contributed only a small amount of DNA to our present-day genome, it is impossible to speak of a ‘merger’ between Sapiens and other human species. Although differences between them were not large enough to completely prevent fertile intercourse, they were sufficient to make such contacts very rare.
How then should we understand the biological relatedness of Sapiens, Neanderthals and Denisovans? Clearly, they were not completely different species like horses and donkeys. On the other hand, they were not just different populations of the same species, like bulldogs and spaniels. Biological reality is not black and white. There are also important grey areas. Every two species that evolved from a common ancestor, such as horses and donkeys, were at one time just two populations of the same species, like bulldogs and spaniels. There must have been a point when the two populations were already quite different from one another, but still capable on rare occasions of having sex and producing fertile offspring. Then another mutation severed this last connecting thread, and they went their separate evolutionary ways.
It seems that about 50,000 years ago, Sapiens, Neanderthals and Denisovans were at that borderline point. They were almost, but not quite, entirely separate species. As we shall see in the next chapter, Sapiens were already very different from Neanderthals and Denisovans not only in their genetic code and physical traits, but also in their cognitive and social abilities, yet it appears it was still just possible, on rare occasions, for a Sapiens and a Neanderthal to produce a fertile offspring. So the populations did not merge, but a few lucky Neanderthal genes did hitch a ride on the Sapiens Express. It is unsettling – and perhaps thrilling – to think that we Sapiens could at one time have sex with an animal from a different species, and produce children together.
3. A speculative reconstruction of a Neanderthal child. Genetic evidence hints that at least some Neanderthals may have had fair skin and hair.
{© Anthropologisches Institut und Museum, Universität Zürich.}
But if the Neanderthals, Denisovans and other human species didn’t merge with Sapiens, why did they vanish? One possibility is that Homo sapiens drove them to extinction. Imagine a Sapiens band reaching a Balkan valley where Neanderthals had lived for hundreds of thousands of years. The newcomers began to hunt the deer and gather the nuts and berries that were the Neanderthals’ traditional staples. Sapiens were more proficient hunters and gatherers – thanks to better technology and superior social skills – so they multiplied and spread. The less resourceful Neanderthals found it increasingly difficult to feed themselves. Their population dwindled and they slowly died out, except perhaps for one or two members who joined their Sapiens neighbours.
Another possibility is that competition for resources flared up into violence and genocide. Tolerance is not a Sapiens trademark. In modern times, a small difference in skin colour, dialect or religion has been enough to prompt one group of Sapiens to set about exterminating another group. Would ancient Sapiens have been more tolerant towards an entirely different human species? It may well be that when Sapiens encountered Neanderthals, the result was the first and most significant ethnic-cleansing campaign in history.
Whichever way it happened, the Neanderthals (and the other human species) pose one of history’s great what ifs. Imagine how things might have turned out had the Neanderthals or Denisovans survived alongside Homo sapiens. What kind of cultures, societies and political structures would have emerged in a world where several different human species coexisted? How, for example, would religious faiths have unfolded? Would the book of Genesis have declared that Neanderthals descend from Adam and Eve, would Jesus have died for the sins of the Denisovans, and would the Qur’an have reserved seats in heaven for all righteous humans, whatever their species? Would Neanderthals have been able to serve in the Roman legions, or in the sprawling bureaucracy of imperial China? Would the American Declaration of Independence hold as a self-evident truth that all members of the genus Homo are created equal? Would Karl Marx have urged workers of all species to unite?
Over the past 10,000 years, Homo sapiens has grown so accustomed to being the only human species that it’s hard for us to conceive of any other possibility. Our lack of brothers and sisters makes it easier to imagine that we are the epitome of creation, and that a chasm separates us from the rest of the animal kingdom. When Charles Darwin indicated that Homo sapiens was just another kind of animal, people were outraged. Even today many refuse to believe it. Had the Neanderthals survived, would we still imagine ourselves to be a creature apart? Perhaps this is exactly why our ancestors wiped out the Neanderthals. They were too familiar to ignore, but too different to tolerate.
Whether Sapiens are to blame or not, no sooner had they arrived at a new location than the native population became extinct. The last remains of Homo soloensis are dated to about 50,000 years ago. Homo denisova disappeared shortly thereafter. Neanderthals made their exit roughly 30,000 years ago. The last dwarf-like humans vanished from Flores Island about 12,000 years ago. They left behind some bones, stone tools, a few genes in our DNA and a lot of unanswered questions. They also left behind us, Homo sapiens, the last human species.
What was the Sapiens’ secret of success? How did we manage to settle so rapidly in so many distant and ecologically different habitats? How did we push all other human species into oblivion? Why couldn’t even the strong, brainy, cold-proof Neanderthals survive our onslaught? The debate continues to rage. The most likely answer is the very thing that makes the debate possible: Homo sapiens conquered the world thanks above all to its unique language.
IN THE PREVIOUS CHAPTER WE SAW THAT although Sapiens had already populated East Africa 150,000 years ago, they began to overrun the rest of planet Earth and drive the other human species to extinction only about 70,000 years ago. In the intervening millennia, even though these archaic Sapiens looked just like us and their brains were as big as ours, they did not enjoy any marked advantage over other human species, did not produce particularly sophisticated tools, and did not accomplish any other special feats.
In fact, in the first recorded encounter between Sapiens and Neanderthals, the Neanderthals won. About 100,000 years ago, some Sapiens groups migrated north to the Levant, which was Neanderthal territory, but failed to secure a firm footing. It might have been due to nasty natives, an inclement climate, or unfamiliar local parasites. Whatever the reason, the Sapiens eventually retreated, leaving the Neanderthals as masters of the Middle East.
This poor record of achievement has led scholars to speculate that the internal structure of the brains of these Sapiens was probably different from ours. They looked like us, but their cognitive abilities – learning, remembering, communicating – were far more limited. Teaching such an ancient Sapiens English, persuading him of the truth of Christian dogma, or getting him to understand the theory of evolution would probably have been hopeless undertakings. Conversely, we would have had a very hard time learning his language and understanding his way of thinking.
But then, beginning about 70,000 years ago, Homo sapiens started doing very special things. Around that date Sapiens bands left Africa for a second time. This time they drove the Neanderthals and all other human species not only from the Middle East, but from the face of the earth. Within a remarkably short period, Sapiens reached Europe and East Asia. About 45,000 years ago, they somehow crossed the open sea and landed in Australia – a continent hitherto untouched by humans. The period from about 70,000 years ago to about 30,000 years ago witnessed the invention of boats, oil lamps, bows and arrows and needles (essential for sewing warm clothing). The first objects that can reliably be called art date from this era (see the Stadel lion-man in this chapter), as does the first clear evidence for religion, commerce and social stratification.
Most researchers believe that these unprecedented accomplishments were the product of a revolution in Sapiens’ cognitive abilities. They maintain that the people who drove the Neanderthals to extinction, settled Australia, and carved the Stadel lion-man were as intelligent, creative and sensitive as we are. If we were to come across the artists of the Stadel Cave, we could learn their language and they ours. We’d be able to explain to them everything we know – from the adventures of Alice in Wonderland to the paradoxes of quantum physics – and they could teach us how their people view the world.
The appearance of new ways of thinking and communicating, between 70,000 and 30,000 years ago, constitutes the Cognitive Revolution. What caused it? We’re not sure. The most commonly believed theory argues that accidental genetic mutations changed the inner wiring of the brains of Sapiens, enabling them to think in unprecedented ways and to communicate using an altogether new type of language. We might call it the Tree of Knowledge mutation. Why did it occur in Sapiens DNA rather than in that of Neanderthals? It was a matter of pure chance, as far as we can tell. But it’s more important to understand the consequences of the Tree of Knowledge mutation than its causes. What was so special about the new Sapiens language that it enabled us to conquer the world?*
It was not the first language. Every animal has some kind of language. Even insects, such as bees and ants, know how to communicate in sophisticated ways, informing one another of the whereabouts of food. Neither was it the first vocal language. Many animals, including all ape and monkey species, have vocal languages. For example, green monkeys use calls of various kinds to communicate. Zoologists have identified one call that means, ‘Careful! An eagle!’ A slightly different call warns, ‘Careful! A lion!’ When researchers played a recording of the first call to a group of monkeys, the monkeys stopped what they were doing and looked upwards in fear. When the same group heard a recording of the second call, the lion warning, they quickly scrambled up a tree. Sapiens can produce many more distinct sounds than green monkeys, but whales and elephants have equally impressive abilities. A parrot can say anything Albert Einstein could say, as well as mimicking the sounds of phones ringing, doors slamming and sirens wailing. Whatever advantage Einstein had over a parrot, it wasn’t vocal. What, then, is so special about our language?
The most common answer is that our language is amazingly supple. We can connect a limited number of sounds and signs to produce an infinite number of sentences, each with a distinct meaning. We can thereby ingest, store and communicate a prodigious amount of information about the surrounding world. A green monkey can yell to its comrades, ‘Careful! A lion!’ But a modern human can tell her friends that this morning, near the bend in the river, she saw a lion tracking a herd of bison. She can then describe the exact location, including the different paths leading to the area. With this information, the members of her band can put their heads together and discuss whether they should approach the river, chase away the lion and hunt the bison.
A second theory agrees that our unique language evolved as a means of sharing information about the world. But the most important information that needed to be conveyed was about humans, not about lions and bison. Our language evolved as a way of gossiping. According to this theory Homo sapiens is primarily a social animal. Social cooperation is our key for survival and reproduction. It is not enough for individual men and women to know the whereabouts of lions and bison. It’s much more important for them to know who in their band hates whom, who is sleeping with whom, who is honest, and who is a cheat.
4. An ivory figurine of a ‘lion-man’ (or ‘lioness-woman’) from the Stadel Cave in Germany (c.32,000 years ago). The body is human, but the head is leonine. This is one of the first indisputable examples of art, and probably of religion, and of the ability of the human mind to imagine things that do not really exist.
{Photo: Thomas Stephan © Ulmer Museum.}
The amount of information that one must obtain and store in order to track the ever-changing relationships of even a few dozen individuals is staggering. (In a band of fifty individuals, there are 1,225 one-on-one relationships, and countless more complex social combinations.) All apes show a keen interest in such social information, but they have trouble gossiping effectively. Neanderthals and archaic Homo sapiens probably also had a hard time talking behind each other’s backs – a much maligned ability which is in fact essential for cooperation in large numbers. The new linguistic skills that modern Sapiens acquired about seventy millennia ago enabled them to gossip for hours on end. Reliable information about who could be trusted meant that small bands could expand into larger bands, and Sapiens could develop tighter and more sophisticated types of cooperation.1
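(A brief worked check of that figure, added for clarity rather than taken from the text: counting each one-on-one relationship as an unordered pair of band members, a band of n individuals – n being simply my label for the band size – contains n(n−1)/2 such pairs:

\[
\binom{50}{2} = \frac{50 \times 49}{2} = 1{,}225.
\]

The ‘more complex social combinations’ – triangles, shifting coalitions and so on – multiply far faster still.)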
The gossip theory might sound like a joke, but numerous studies support it. Even today the vast majority of human communication – whether in the form of emails, phone calls or newspaper columns – is gossip. It comes so naturally to us that it seems as if our language evolved for this very purpose. Do you think that history professors chat about the reasons for World War One when they meet for lunch, or that nuclear physicists spend their coffee breaks at scientific conferences talking about quarks? Sometimes. But more often, they gossip about the professor who caught her husband cheating, or the quarrel between the head of the department and the dean, or the rumours that a colleague used his research funds to buy a Lexus. Gossip usually focuses on wrongdoings. Rumour-mongers are the original fourth estate, journalists who inform society about and thus protect it from cheats and freeloaders.
Most likely, both the gossip theory and the there-is-a-lion-near-the-river theory are valid. Yet the truly unique feature of our language is not its ability to transmit information about men and lions. Rather, it’s the ability to transmit information about things that do not exist at all. As far as we know, only Sapiens can talk about entire kinds of entities that they have never seen, touched or smelled.
Legends, myths, gods and religions appeared for the first time with the Cognitive Revolution. Many animals and human species could previously say, ‘Careful! A lion!’ Thanks to the Cognitive Revolution, Homo sapiens acquired the ability to say, ‘The lion is the guardian spirit of our tribe.’ This ability to speak about fictions is the most distinctive feature of Sapiens language.
It’s relatively easy to agree that only Homo sapiens can speak about things that don’t really exist, and believe six impossible things before breakfast. You could never convince a monkey to give you a banana by promising him limitless bananas after death in monkey heaven. But why is it important? After all, fiction can be dangerously misleading or distracting. People who go to the forest looking for fairies and unicorns would seem to have less chance of survival than people who go looking for mushrooms and deer. And if you spend hours praying to non-existing guardian spirits, aren’t you wasting precious time, time better spent foraging, fighting and fornicating?
But fiction has enabled us not merely to imagine things, but to do so collectively. We can weave common myths such as the biblical creation story, the Dreamtime myths of Aboriginal Australians, and the nationalist myths of modern states. Such myths give Sapiens the unprecedented ability to cooperate flexibly in large numbers. Ants and bees can also work together in huge numbers, but they do so in a very rigid manner and only with close relatives. Wolves and chimpanzees cooperate far more flexibly than ants, but they can do so only with small numbers of other individuals that they know intimately. Sapiens can cooperate in extremely flexible ways with countless numbers of strangers. That’s why Sapiens rule the world, whereas ants eat our leftovers and chimps are locked up in zoos and research laboratories.
Our chimpanzee cousins usually live in small troops of several dozen individuals. They form close friendships, hunt together and fight shoulder to shoulder against baboons, cheetahs and enemy chimpanzees. Their social structure tends to be hierarchical. The dominant member, who is almost always a male, is termed the ‘alpha male’. Other males and females exhibit their submission to the alpha male by bowing before him while making grunting sounds, not unlike human subjects kowtowing before a king. The alpha male strives to maintain social harmony within his troop. When two individuals fight, he will intervene and stop the violence. Less benevolently, he might monopolise particularly coveted foods and prevent lower-ranking males from mating with the females.
When two males are contesting the alpha position, they usually do so by forming extensive coalitions of supporters, both male and female, from within the group. Ties between coalition members are based on intimate daily contact – hugging, touching, kissing, grooming and mutual favours. Just as human politicians on election campaigns go around shaking hands and kissing babies, so aspirants to the top position in a chimpanzee group spend much time hugging, back-slapping and kissing baby chimps. The alpha male usually wins his position not because he is physically stronger, but because he leads a large and stable coalition. These coalitions play a central part not only during overt struggles for the alpha position, but in almost all day-to-day activities. Members of a coalition spend more time together, share food, and help one another in times of trouble.
There are clear limits to the size of groups that can be formed and maintained in such a way. In order to function, all members of a group must know each other intimately. Two chimpanzees who have never met, never fought, and never engaged in mutual grooming will not know whether they can trust one another, whether it would be worthwhile to help one another, and which of them ranks higher. Under natural conditions, a typical chimpanzee troop consists of about twenty to fifty individuals. As the number of chimpanzees in a troop increases, the social order destabilises, eventually leading to a rupture and the formation of a new troop by some of the animals. Only in a handful of cases have zoologists observed groups larger than a hundred. Separate groups seldom cooperate, and tend to compete for territory and food. Researchers have documented prolonged warfare between groups, and even one case of ‘genocidal’ activity in which one troop systematically slaughtered most members of a neighbouring band.2
Similar patterns probably dominated the social lives of early humans, including archaic Homo sapiens. Humans, like chimps, have social instincts that enabled our ancestors to form friendships and hierarchies, and to hunt or fight together. However, like the social instincts of chimps, those of humans were adapted only for small intimate groups. When the group grew too large, its social order destabilised and the band split. Even if a particularly fertile valley could feed 500 archaic Sapiens, there was no way that so many strangers could live together. How could they agree who should be leader, who should hunt where, or who should mate with whom?
In the wake of the Cognitive Revolution, gossip helped Homo sapiens to form larger and more stable bands. But even gossip has its limits. Sociological research has shown that the maximum ‘natural’ size of a group bonded by gossip is about 150 individuals. Most people can neither intimately know, nor gossip effectively about, more than 150 human beings.
Even today, a critical threshold in human organisations falls somewhere around this magic number. Below this threshold, communities, businesses, social networks and military units can maintain themselves based mainly on intimate acquaintance and rumour-mongering. There is no need for formal ranks, titles and law books to keep order.3 A platoon of thirty soldiers or even a company of a hundred soldiers can function well on the basis of intimate relations, with a minimum of formal discipline. A well-respected sergeant can become ‘king of the company’ and exercise authority even over commissioned officers. A small family business can survive and flourish without a board of directors, a CEO or an accounting department.
But once the threshold of 150 individuals is crossed, things can no longer work that way. You cannot run a division with thousands of soldiers the same way you run a platoon. Successful family businesses usually face a crisis when they grow larger and hire more personnel. If they cannot reinvent themselves, they go bust.
How did Homo sapiens manage to cross this critical threshold, eventually founding cities comprising tens of thousands of inhabitants and empires ruling hundreds of millions? The secret was probably the appearance of fiction. Large numbers of strangers can cooperate successfully by believing in common myths.
Any large-scale human cooperation – whether a modern state, a medieval church, an ancient city or an archaic tribe – is rooted in common myths that exist only in people’s collective imagination. Churches are rooted in common religious myths. Two Catholics who have never met can nevertheless go together on crusade or pool funds to build a hospital because they both believe that God was incarnated in human flesh and allowed Himself to be crucified to redeem our sins. States are rooted in common national myths. Two Serbs who have never met might risk their lives to save one another because both believe in the existence of the Serbian nation, the Serbian homeland and the Serbian flag. Judicial systems are rooted in common legal myths. Two lawyers who have never met can nevertheless combine efforts to defend a complete stranger because they both believe in the existence of laws, justice, human rights – and the money paid out in fees.
Yet none of these things exists outside the stories that people invent and tell one another. There are no gods in the universe, no nations, no money, no human rights, no laws, and no justice outside the common imagination of human beings.
People easily understand that ‘primitives’ cement their social order by believing in ghosts and spirits, and gathering each full moon to dance together around the campfire. What we fail to appreciate is that our modern institutions function on exactly the same basis. Take for example the world of business corporations. Modern business-people and lawyers are, in fact, powerful sorcerers. The principal difference between them and tribal shamans is that modern lawyers tell far stranger tales. The legend of Peugeot affords us a good example.
An icon that somewhat resembles the Stadel lion-man appears today on cars, trucks and motorcycles from Paris to Sydney. It’s the hood ornament that adorns vehicles made by Peugeot, one of the oldest and largest of Europe’s carmakers. Peugeot began as a small family business in the village of Valentigney, just 200 miles from the Stadel Cave. Today the company employs about 200,000 people worldwide, most of whom are complete strangers to each other. These strangers cooperate so effectively that in 2008 Peugeot produced more than 1.5 million automobiles, earning revenues of about 55 billion euros.
In what sense can we say that Peugeot SA (the company’s official name) exists? There are many Peugeot vehicles, but these are obviously not the company. Even if every Peugeot in the world were simultaneously junked and sold for scrap metal, Peugeot SA would not disappear. It would continue to manufacture new cars and issue its annual report. The company owns factories, machinery and showrooms, and employs mechanics, accountants and secretaries, but all these together do not comprise Peugeot. A disaster might kill every single one of Peugeot’s employees, and go on to destroy all of its assembly lines and executive offices. Even then, the company could borrow money, hire new employees, build new factories and buy new machinery. Peugeot has managers and shareholders, but neither do they constitute the company. All the managers could be dismissed and all its shares sold, but the company itself would remain intact.
5. The Peugeot Lion
{© magiccarpics.co.uk.}
It doesn’t mean that Peugeot SA is invulnerable or immortal. If a judge were to mandate the dissolution of the company, its factories would remain standing and its workers, accountants, managers and shareholders would continue to live – but Peugeot SA would immediately vanish. In short, Peugeot SA seems to have no essential connection to the physical world. Does it really exist?
Peugeot is a figment of our collective imagination. Lawyers call this a ‘legal fiction’. It can’t be pointed at; it is not a physical object. But it exists as a legal entity. Just like you or me, it is bound by the laws of the countries in which it operates. It can open a bank account and own property. It pays taxes, and it can be sued and even prosecuted separately from any of the people who own or work for it.
Peugeot belongs to a particular genre of legal fictions called ‘limited liability companies’. The idea behind such companies is among humanity’s most ingenious inventions. Homo sapiens lived for untold millennia without them. During most of recorded history property could be owned only by flesh-and-blood humans, the kind that stood on two legs and had big brains. If in thirteenth-century France Jean set up a wagon-manufacturing workshop, he himself was the business. If a wagon he’d made broke down a week after purchase, the disgruntled buyer would have sued Jean personally. If Jean had borrowed 1,000 gold coins to set up his workshop and the business failed, he would have had to repay the loan by selling his private property – his house, his cow, his land. He might even have had to sell his children into servitude. If he couldn’t cover the debt, he could be thrown in prison by the state or enslaved by his creditors. He was fully liable, without limit, for all obligations incurred by his workshop.
If you had lived back then, you would probably have thought twice before you opened an enterprise of your own. And indeed this legal situation discouraged entrepreneurship. People were afraid to start new businesses and take economic risks. It hardly seemed worth taking the chance that their families could end up utterly destitute.
This is why people began collectively to imagine the existence of limited liability companies. Such companies were legally independent of the people who set them up, or invested money in them, or managed them. Over the last few centuries such companies have become the main players in the economic arena, and we have grown so used to them that we forget they exist only in our imagination. In the US, the technical term for a limited liability company is a ‘corporation’, which is ironic, because the term derives from ‘corpus’ (‘body’ in Latin) – the one thing these corporations lack. Despite their having no real bodies, the American legal system treats corporations as legal persons, as if they were flesh-and-blood human beings.
And so did the French legal system back in 1896, when Armand Peugeot, who had inherited from his parents a metalworking shop that produced springs, saws and bicycles, decided to go into the automobile business. To that end, he set up a limited liability company. He named the company after himself, but it was independent of him. If one of the cars broke down, the buyer could sue Peugeot, but not Armand Peugeot. If the company borrowed millions of francs and then went bust, Armand Peugeot did not owe its creditors a single franc. The loan, after all, had been given to Peugeot, the company, not to Armand Peugeot, the Homo sapiens. Armand Peugeot died in 1915. Peugeot, the company, is still alive and well.
How exactly did Armand Peugeot, the man, create Peugeot, the company? In much the same way that priests and sorcerers have created gods and demons throughout history, and in which thousands of French curés were still creating Christ’s body every Sunday in the parish churches. It all revolved around telling stories, and convincing people to believe them. In the case of the French curés, the crucial story was that of Christ’s life and death as told by the Catholic Church. According to this story, if a Catholic priest dressed in his sacred garments solemnly said the right words at the right moment, mundane bread and wine turned into God’s flesh and blood. The priest exclaimed ‘Hoc est corpus meum!’ (Latin for ‘This is my body!’) and hocus pocus – the bread turned into Christ’s flesh. Seeing that the priest had properly and assiduously observed all the procedures, millions of devout French Catholics behaved as if God really existed in the consecrated bread and wine.
In the case of Peugeot SA the crucial story was the French legal code, as written by the French parliament. According to the French legislators, if a certified lawyer followed all the proper liturgy and rituals, wrote all the required spells and oaths on a wonderfully decorated piece of paper, and affixed his ornate signature to the bottom of the document, then hocus pocus – a new company was incorporated. When in 1896 Armand Peugeot wanted to create his company, he paid a lawyer to go through all these sacred procedures. Once the lawyer had performed all the right rituals and pronounced all the necessary spells and oaths, millions of upright French citizens behaved as if the Peugeot company really existed.
Telling effective stories is not easy. The difficulty lies not in telling the story, but in convincing everyone else to believe it. Much of history revolves around this question: how does one convince millions of people to believe particular stories about gods, or nations, or limited liability companies? Yet when it succeeds, it gives Sapiens immense power, because it enables millions of strangers to cooperate and work towards common goals. Just try to imagine how difficult it would have been to create states, or churches, or legal systems if we could speak only about things that really exist, such as rivers, trees and lions.
Over the years, people have woven an incredibly complex network of stories. Within this network, fictions such as Peugeot not only exist, but also accumulate immense power. The kinds of things that people create through this network of stories are known in academic circles as ‘fictions’, ‘social constructs’, or ‘imagined realities’. An imagined reality is not a lie. I lie when I say that there is a lion near the river when I know perfectly well that there is no lion there. There is nothing special about lies. Green monkeys and chimpanzees can lie. A green monkey, for example, has been observed calling ‘Careful! A lion!’ when there was no lion around. This alarm conveniently frightened away a fellow monkey who had just found a banana, leaving the liar all alone to steal the prize for itself.
Unlike lying, an imagined reality is something that everyone believes in, and as long as this communal belief persists, the imagined reality exerts force in the world. The sculptor from the Stadel Cave may sincerely have believed in the existence of the lion-man guardian spirit. Some sorcerers are charlatans, but most sincerely believe in the existence of gods and demons. Most millionaires sincerely believe in the existence of money and limited liability companies. Most human-rights activists sincerely believe in the existence of human rights. No one was lying when, in 2011, the UN demanded that the Libyan government respect the human rights of its citizens, even though the UN, Libya and human rights are all figments of our fertile imaginations.
Ever since the Cognitive Revolution, Sapiens have thus been living in a dual reality. On the one hand, the objective reality of rivers, trees and lions; and on the other hand, the imagined reality of gods, nations and corporations. As time went by, the imagined reality became ever more powerful, so that today the very survival of rivers, trees and lions depends on the grace of imagined entities such as the United States and Google.
The ability to create an imagined reality out of words enabled large numbers of strangers to cooperate effectively. But it also did something more. Since large-scale human cooperation is based on myths, the way people cooperate can be altered by changing the myths – by telling different stories. Under the right circumstances myths can change rapidly. In 1789 the French population switched almost overnight from believing in the myth of the divine right of kings to believing in the myth of the sovereignty of the people. Consequently, ever since the Cognitive Revolution Homo sapiens has been able to revise its behaviour rapidly in accordance with changing needs. This opened a fast lane of cultural evolution, bypassing the traffic jams of genetic evolution. Speeding down this fast lane, Homo sapiens soon far outstripped all other human and animal species in its ability to cooperate.
The behaviour of other social animals is determined to a large extent by their genes. DNA is not an autocrat. Animal behaviour is also influenced by environmental factors and individual quirks. Nevertheless, in a given environment, animals of the same species will tend to behave in a similar way. Significant changes in social behaviour cannot occur, in general, without genetic mutations. For example, common chimpanzees have a genetic tendency to live in hierarchical groups headed by an alpha male. Members of a closely related chimpanzee species, bonobos, usually live in more egalitarian groups dominated by female alliances. Female common chimpanzees cannot take lessons from their bonobo relatives and stage a feminist revolution. Male chimps cannot gather in a constitutional assembly to abolish the office of alpha male and declare that from here on out all chimps are to be treated as equals. Such dramatic changes in behaviour would occur only if something changed in the chimpanzees’ DNA.
For similar reasons, archaic humans did not initiate any revolutions. As far as we can tell, changes in social patterns, the invention of new technologies and the settlement of alien habitats resulted from genetic mutations and environmental pressures more than from cultural initiatives. This is why it took humans hundreds of thousands of years to take these steps. Two million years ago, genetic mutations resulted in the appearance of a new human species called Homo erectus. Its emergence was accompanied by the development of a new stone tool technology, now recognised as a defining feature of this species. As long as Homo erectus did not undergo further genetic alterations, its stone tools remained roughly the same – for close to 2 million years!
In contrast, ever since the Cognitive Revolution, Sapiens have been able to change their behaviour quickly, transmitting new behaviours to future generations without any need of genetic or environmental change. As a prime example, consider the repeated appearance of childless elites, such as the Catholic priesthood, Buddhist monastic orders and Chinese eunuch bureaucracies. The existence of such elites goes against the most fundamental principles of natural selection, since these dominant members of society willingly give up procreation. Whereas chimpanzee alpha males use their power to have sex with as many females as possible – and consequently sire a large proportion of their troop’s young – the Catholic alpha male abstains completely from sexual intercourse or raising a family. This abstinence does not result from unique environmental conditions such as a severe lack of food or want of potential mates. Nor is it the result of some quirky genetic mutation. The Catholic Church has survived for centuries, not by passing on a ‘celibacy gene’ from one pope to the next, but by passing on the stories of the New Testament and of Catholic canon law.
In other words, while the behaviour patterns of archaic humans remained fixed for tens of thousands of years, Sapiens could transform their social structures, the nature of their interpersonal relations, their economic activities and a host of other behaviours within a decade or two. Consider a resident of Berlin, born in 1900 and living to the ripe age of one hundred. She spent her childhood in the Hohenzollern Empire of Wilhelm II; her adult years in the Weimar Republic, the Nazi Third Reich and Communist East Germany; and she died a citizen of a democratic and reunified Germany. She had managed to be a part of five very different sociopolitical systems, though her DNA remained exactly the same.
This was the key to Sapiens’ success. In a one-on-one brawl, a Neanderthal would probably have beaten a Sapiens. But in a conflict of hundreds, Neanderthals wouldn’t stand a chance. Neanderthals could share information about the whereabouts of lions, but they probably could not tell – and revise – stories about tribal spirits. Without an ability to compose fiction, Neanderthals were unable to cooperate effectively in large numbers, nor could they adapt their social behaviour to rapidly changing challenges.
While we can’t get inside a Neanderthal mind to understand how they thought, we have indirect evidence of the limits to their cognition compared with their Sapiens rivals. Archaeologists excavating 30,000-year-old Sapiens sites in the European heartland occasionally find seashells from the Mediterranean and Atlantic coasts. In all likelihood, these shells got to the continental interior through long-distance trade between different Sapiens bands. Neanderthal sites lack any evidence of such trade. Each group manufactured its own tools from local materials.4
6. The Catholic alpha male abstains from sexual intercourse and raising a family, even though there is no genetic or ecological reason for him to do so.
{© Andreas Solaro/AFP/Getty Images.}
Another example comes from the South Pacific. Sapiens bands that lived on the island of New Ireland, north of New Guinea, used a volcanic glass called obsidian to manufacture particularly strong and sharp tools. New Ireland, however, has no natural deposits of obsidian. Laboratory tests revealed that the obsidian they used was brought from deposits on New Britain, an island 250 miles away. Some of the inhabitants of these islands must have been skilled navigators who traded from island to island over long distances.5
Trade may seem a very pragmatic activity, one that needs no fictive basis. Yet the fact is that no animal other than Sapiens engages in trade, and all the Sapiens trade networks about which we have detailed evidence were based on fictions. Trade cannot exist without trust, and it is very difficult to trust strangers. The global trade network of today is based on our trust in such fictional entities as the dollar, the Federal Reserve Bank, and the totemic trademarks of corporations. When two strangers in a tribal society want to trade, they will often establish trust by appealing to a common god, mythical ancestor or totem animal.
If archaic Sapiens believing in such fictions traded shells and obsidian, it stands to reason that they could also have traded information, thus creating a much denser and wider knowledge network than the one that served Neanderthals and other archaic humans.
Hunting techniques provide another illustration of these differences. Neanderthals usually hunted alone or in small groups. Sapiens, on the other hand, developed techniques that relied on cooperation between many dozens of individuals, and perhaps even between different bands. One particularly effective method was to surround an entire herd of animals, such as wild horses, then chase them into a narrow gorge, where it was easy to slaughter them en masse. If all went according to plan, the bands could harvest tons of meat, fat and animal skins in a single afternoon of collective effort, and either consume these riches in a giant potlatch, or dry, smoke or (in Arctic areas) freeze them for later usage. Archaeologists have discovered sites where entire herds were butchered annually in such ways. There are even sites where fences and obstacles were erected in order to create artificial traps and slaughtering grounds.
We may presume that Neanderthals were not pleased to see their traditional hunting grounds turned into Sapiens-controlled slaughterhouses. However, if violence broke out between the two species, Neanderthals were not much better off than wild horses. Fifty Neanderthals cooperating in traditional and static patterns were no match for 500 versatile and innovative Sapiens. And even if the Sapiens lost the first round, they could quickly invent new stratagems that would enable them to win the next time.
New ability | Wider consequences
The ability to transmit larger quantities of information about the world surrounding Homo sapiens | Planning and carrying out complex actions, such as avoiding lions and hunting bison
The ability to transmit larger quantities of information about Sapiens social relationships | Larger and more cohesive groups, numbering up to 150 individuals
The ability to transmit information about things that do not really exist, such as tribal spirits, nations, limited liability companies, and human rights | Cooperation between very large numbers of strangers
The immense diversity of imagined realities that Sapiens invented, and the resulting diversity of behaviour patterns, are the main components of what we call ‘cultures’. Once cultures appeared, they never ceased to change and develop, and these unstoppable alterations are what we call ‘history’.
The Cognitive Revolution is accordingly the point when history declared its independence from biology. Until the Cognitive Revolution, the doings of all human species belonged to the realm of biology, or, if you so prefer, prehistory (I tend to avoid the term ‘prehistory’, because it wrongly implies that even before the Cognitive Revolution, humans were in a category of their own). From the Cognitive Revolution onwards, historical narratives replace biological theories as our primary means of explaining the development of Homo sapiens. To understand the rise of Christianity or the French Revolution, it is not enough to comprehend the interaction of genes, hormones and organisms. It is necessary to take into account the interaction of ideas, images and fantasies as well.
This does not mean that Homo sapiens and human culture became exempt from biological laws. We are still animals, and our physical, emotional and cognitive abilities are still shaped by our DNA. Our societies are built from the same building blocks as Neanderthal or chimpanzee societies, and the more we examine these building blocks – sensations, emotions, family ties – the less difference we find between us and other apes.
It is, however, a mistake to look for the differences at the level of the individual or the family. One on one, even ten on ten, we are embarrassingly similar to chimpanzees. Significant differences begin to appear only when we cross the threshold of 150 individuals, and when we reach 1,000–2,000 individuals, the differences are astounding. If you tried to cram thousands of chimpanzees into Tiananmen Square, Wall Street, the Vatican or the headquarters of the United Nations, the result would be pandemonium. By contrast, Sapiens regularly gather by the thousands in such places. Together, they create orderly patterns – such as trade networks, mass celebrations and political institutions – that they could never have created in isolation. The real difference between us and chimpanzees is the mythical glue that binds together large numbers of individuals, families and groups. This glue has made us the masters of creation.
Of course, we also needed other skills, such as the ability to make and use tools. Yet tool-making is of little consequence unless it is coupled with the ability to cooperate with many others. How is it that we now have intercontinental missiles with nuclear warheads, whereas 30,000 years ago we had only sticks with flint spearheads? Physiologically, there has been no significant improvement in our tool-making capacity over the last 30,000 years. Albert Einstein was far less dexterous with his hands than was an ancient hunter-gatherer. However, our capacity to cooperate with large numbers of strangers has improved dramatically. The ancient flint spearhead was manufactured in minutes by a single person, who relied on the advice and help of a few intimate friends. The production of a modern nuclear warhead requires the cooperation of millions of strangers all over the world – from the workers who mine the uranium ore in the depths of the earth to theoretical physicists who write long mathematical formulas to describe the interactions of subatomic particles.
To summarise the relationship between biology and history after the Cognitive Revolution:
a. Biology sets the basic parameters for the behaviour and capacities of Homo sapiens. The whole of history takes place within the bounds of this biological arena.
b. However, this arena is extraordinarily large, allowing Sapiens to play an astounding variety of games. Thanks to their ability to invent fiction, Sapiens create more and more complex games, which each generation develops and elaborates even further.
c. Consequently, in order to understand how Sapiens behave, we must describe the historical evolution of their actions. Referring only to our biological constraints would be like a radio sportscaster who, attending the World Cup football championships, offers his listeners a detailed description of the playing field rather than an account of what the players are doing.
What games did our Stone Age ancestors play in the arena of history? As far as we know, the people who carved the Stadel lion-man some 30,000 years ago had the same physical, emotional and intellectual abilities we have. What did they do when they woke up in the morning? What did they eat for breakfast – and lunch? What were their societies like? Did they have monogamous relationships and nuclear families? Did they have ceremonies, moral codes, sports contests and religious rituals? Did they fight wars? The next chapter takes a peek behind the curtain of the ages, examining what life was like in the millennia separating the Cognitive Revolution from the Agricultural Revolution.
TO UNDERSTAND OUR NATURE, HISTORY and psychology, we must get inside the heads of our hunter-gatherer ancestors. For nearly the entire history of our species, Sapiens lived as foragers. The past 200 years, during which ever increasing numbers of Sapiens have obtained their daily bread as urban labourers and office workers, and the preceding 10,000 years, during which most Sapiens lived as farmers and herders, are the blink of an eye compared to the tens of thousands of years during which our ancestors hunted and gathered.
The flourishing field of evolutionary psychology argues that many of our present-day social and psychological characteristics were shaped during this long pre-agricultural era. Even today, scholars in this field claim, our brains and minds are adapted to a life of hunting and gathering. Our eating habits, our conflicts and our sexuality are all the result of the way our hunter-gatherer minds interact with our current post-industrial environment, with its mega-cities, aeroplanes, telephones and computers. This environment gives us more material resources and longer lives than those enjoyed by any previous generation, but it often makes us feel alienated, depressed and pressured. To understand why, evolutionary psychologists argue, we need to delve into the hunter-gatherer world that shaped us, the world that we subconsciously still inhabit.
Why, for example, do people gorge on high-calorie food that does their bodies little good? Today’s affluent societies are in the throes of a plague of obesity, which is rapidly spreading to developing countries. It’s a puzzle why we binge on the sweetest and greasiest food we can find, until we consider the eating habits of our forager forebears. In the savannahs and forests they inhabited, high-calorie sweets were extremely rare and food in general was in short supply. A typical forager 30,000 years ago had access to only one type of sweet food – ripe fruit. If a Stone Age woman came across a tree groaning with figs, the most sensible thing to do was to eat as many of them as she could on the spot, before the local baboon band picked the tree bare. The instinct to gorge on high-calorie food was hard-wired into our genes. Today we may be living in high-rise apartments with over-stuffed refrigerators, but our DNA still thinks we are in the savannah. That’s what makes some of us spoon down an entire tub of Ben & Jerry’s when we find one in the freezer and wash it down with a jumbo Coke.
This ‘gorging gene’ theory is widely accepted. Other theories are far more contentious. For example, some evolutionary psychologists argue that ancient foraging bands were not composed of nuclear families centred on monogamous couples. Rather, foragers lived in communes devoid of private property, monogamous relationships and even fatherhood. In such a band, a woman could have sex and form intimate bonds with several men (and women) simultaneously, and all of the band’s adults cooperated in parenting its children. Since no man knew definitively which of the children were his, men showed equal concern for all youngsters.
Such a social structure is not an Aquarian utopia. It’s well documented among animals, notably our closest relatives, the chimpanzees and bonobos. There are even a number of present-day human cultures in which collective fatherhood is practised, as for example among the Barí Indians. According to the beliefs of such societies, a child is not born from the sperm of a single man, but from the accumulation of sperm in a woman’s womb. A good mother will make a point of having sex with several different men, especially when she is pregnant, so that her child will enjoy the qualities (and paternal care) not merely of the best hunter, but also of the best storyteller, the strongest warrior and the most considerate lover. If this sounds silly, bear in mind that before the development of modern embryological studies, people had no solid evidence that babies are always sired by a single father rather than by many.
The proponents of this ‘ancient commune’ theory argue that the frequent infidelities that characterise modern marriages, and the high rates of divorce, not to mention the cornucopia of psychological complexes from which both children and adults suffer, all result from forcing humans to live in nuclear families and monogamous relationships that are incompatible with our biological software.1
Many scholars vehemently reject this theory, insisting that both monogamy and the forming of nuclear families are core human behaviours. Though ancient hunter-gatherer societies tended to be more communal and egalitarian than modern societies, these researchers argue, they were nevertheless composed of separate cells, each containing a jealous couple and the children they held in common. This is why today monogamous relationships and nuclear families are the norm in the vast majority of cultures, why men and women tend to be very possessive of their partners and children, and why even in modern states such as North Korea and Syria political authority passes from father to son.
In order to resolve this controversy and understand our sexuality, society and politics, we need to learn something about the living conditions of our ancestors, to examine how Sapiens lived between the Cognitive Revolution of 70,000 years ago, and the start of the Agricultural Revolution about 12,000 years ago.
Unfortunately, there are few certainties regarding the lives of our forager ancestors. The debate between the ‘ancient commune’ and ‘eternal monogamy’ schools is based on flimsy evidence. We obviously have no written records from the age of foragers, and the archaeological evidence consists mainly of fossilised bones and stone tools. Artefacts made of more perishable materials – such as wood, bamboo or leather – survive only under unique conditions. The common impression that pre-agricultural humans lived in an age of stone is a misconception based on this archaeological bias. The Stone Age should more accurately be called the Wood Age, because most of the tools used by ancient hunter-gatherers were made of wood.
Any reconstruction of the lives of ancient hunter-gatherers from the surviving artefacts is extremely problematic. One of the most glaring differences between the ancient foragers and their agricultural and industrial descendants is that foragers had very few artefacts to begin with, and these played a comparatively modest role in their lives. Over the course of his or her life, a typical member of a modern affluent society will own several million artefacts – from cars and houses to disposable nappies and milk cartons. There’s hardly an activity, a belief, or even an emotion that is not mediated by objects of our own devising. Our eating habits are mediated by a mind-boggling collection of such items, from spoons and glasses to genetic engineering labs and gigantic ocean-going ships. In play, we use a plethora of toys, from plastic cards to 100,000-seater stadiums. Our romantic and sexual relations are accoutred by rings, beds, nice clothes, sexy underwear, condoms, fashionable restaurants, cheap motels, airport lounges, wedding halls and catering companies. Religions bring the sacred into our lives with Gothic churches, Muslim mosques, Hindu ashrams, Torah scrolls, Tibetan prayer wheels, priestly cassocks, candles, incense, Christmas trees, matzah balls, tombstones and icons.
We hardly notice how ubiquitous our stuff is until we have to move it to a new house. Foragers moved house every month, every week, and sometimes even every day, toting whatever they had on their backs. There were no moving companies, wagons, or even pack animals to share the burden. They consequently had to make do with only the most essential possessions. It’s reasonable to presume, then, that the greater part of their mental, religious and emotional lives was conducted without the help of artefacts. An archaeologist working 100,000 years from now could piece together a reasonable picture of Muslim belief and practice from the myriad objects he unearthed in a ruined mosque. But we are largely at a loss in trying to comprehend the beliefs and rituals of ancient hunter-gatherers. It’s much the same dilemma that a future historian would face if he had to depict the social world of twenty-first-century teenagers solely on the basis of their surviving snail mail – since no records will remain of their phone conversations, emails, blogs and text messages.
A reliance on artefacts will thus bias an account of ancient hunter-gatherer life. One way to remedy this is to look at modern forager societies. These can be studied directly, by anthropological observation. But there are good reasons to be very careful in extrapolating from modern forager societies to ancient ones.
Firstly, all forager societies that have survived into the modern era have been influenced by neighbouring agricultural and industrial societies. Consequently, it’s risky to assume that what is true of them was also true tens of thousands of years ago.
Secondly, modern forager societies have survived mainly in areas with difficult climatic conditions and inhospitable terrain, ill-suited for agriculture. Societies that have adapted to the extreme conditions of places such as the Kalahari Desert in southern Africa may well provide a very misleading model for understanding ancient societies in fertile areas such as the Yangtze River Valley. In particular, population density in an area like the Kalahari Desert is far lower than it was around the ancient Yangtze, and this has far-reaching implications for key questions about the size and structure of human bands and the relations between them.
Thirdly, the most notable characteristic of hunter-gatherer societies is how different they are one from the other. They differ not only from one part of the world to another but even in the same region. One good example is the huge variety the first European settlers found among the Aborigine peoples of Australia. Just before the British conquest, between 300,000 and 700,000 hunter-gatherers lived on the continent in 200–600 tribes, each of which was further divided into several bands.2 Each tribe had its own language, religion, norms and customs. Living around what is now Adelaide in southern Australia were several patrilineal clans that reckoned descent from the father’s side. These clans bonded together into tribes on a strictly territorial basis. In contrast, some tribes in northern Australia gave more importance to a person’s maternal ancestry, and a person’s tribal identity depended on his or her totem rather than on territory.
It stands to reason that the ethnic and cultural variety among ancient hunter-gatherers was equally impressive, and that the 5 million to 8 million foragers who populated the world on the eve of the Agricultural Revolution were divided into thousands of separate tribes with thousands of different languages and cultures.3 This, after all, was one of the main legacies of the Cognitive Revolution. Thanks to the appearance of fiction, even people with the same genetic make-up who lived under similar ecological conditions were able to create very different imagined realities, which manifested themselves in different norms and values.
For example, there’s every reason to believe that a forager band that lived 30,000 years ago on the spot where Oxford University now stands would have spoken a different language from one living where Cambridge is now situated. One band might have been belligerent and the other peaceful. Perhaps the Cambridge band was communal while the one at Oxford was based on nuclear families. The Cantabrigians might have spent long hours carving wooden statues of their guardian spirits, whereas the Oxonians may have worshipped through dance. The former perhaps believed in reincarnation, while the latter thought this was nonsense. In one society, homosexual relationships might have been accepted, while in the other they were taboo.
In other words, while anthropological observations of modern foragers can help us understand some of the possibilities available to ancient foragers, the ancient horizon of possibilities was much broader, and most of it is hidden from our view.* The heated debates about Homo sapiens’ ‘natural way of life’ miss the main point. Ever since the Cognitive Revolution, there hasn’t been a single natural way of life for Sapiens. There are only cultural choices, from among a bewildering palette of possibilities.
What generalisations can we make about life in the pre-agricultural world nevertheless? It seems safe to say that the vast majority of people lived in small bands numbering several dozen or at most several hundred individuals, and that all these individuals were humans. It is important to note this last point, because it is far from obvious. Most members of agricultural and industrial societies are domesticated animals. They are not equal to their masters, of course, but they are members all the same. Today, the society called New Zealand is composed of 4.5 million Sapiens and 50 million sheep.
There was just one exception to this general rule: the dog. The dog was the first animal domesticated by Homo sapiens, and this occurred before the Agricultural Revolution. Experts disagree about the exact date, but we have incontrovertible evidence of domesticated dogs from about 15,000 years ago. They may have joined the human pack thousands of years earlier.
Dogs were used for hunting and fighting, and as an alarm system against wild beasts and human intruders. With the passing of generations, the two species co-evolved to communicate well with each other. Dogs that were most attentive to the needs and feelings of their human companions got extra care and food, and were more likely to survive. Simultaneously, dogs learned to manipulate people for their own needs. A 15,000-year bond has yielded a much deeper understanding and affection between humans and dogs than between humans and any other animal.4 In some cases dead dogs were even buried ceremoniously, much like humans.
Members of a band knew each other very intimately, and were surrounded throughout their lives by friends and relatives. Loneliness and privacy were rare. Neighbouring bands probably competed for resources and even fought one another, but they also had friendly contacts. They exchanged members, hunted together, traded rare luxuries, cemented political alliances and celebrated religious festivals. Such cooperation was one of the important trademarks of Homo sapiens, and gave it a crucial edge over other human species. Sometimes relations with neighbouring bands were tight enough that together they constituted a single tribe, sharing a common language, common myths, and common norms and values.
Yet we should not overestimate the importance of such external relations. Even if in times of crisis neighbouring bands drew closer together, and even if they occasionally gathered to hunt or feast together, they still spent the vast majority of their time in complete isolation and independence. Trade was mostly limited to prestige items such as shells, amber and pigments. There is no evidence that people traded staple goods like fruits and meat, or that the existence of one band depended on the importing of goods from another. Sociopolitical relations, too, tended to be sporadic. The tribe did not serve as a permanent political framework, and even if it had seasonal meeting places, there were no permanent towns or institutions. The average person lived many months without seeing or hearing a human from outside of her own band, and she encountered throughout her life no more than a few hundred humans. The Sapiens population was thinly spread over vast territories. Before the Agricultural Revolution, the human population of the entire planet was smaller than that of today’s Cairo.
7. First pet? A 12,000-year-old tomb found in northern Israel. It contains the skeleton of a fifty-year-old woman next to that of a puppy (bottom left corner). The puppy was buried close to the woman’s head. Her left hand is resting on the dog in a way that might indicate an emotional connection. There are, of course, other possible explanations. Perhaps, for example, the puppy was a gift to the gatekeeper of the next world.
{Photo: The Upper Galilee Museum of Prehistory.}
Most Sapiens bands lived on the road, roaming from place to place in search of food. Their movements were influenced by the changing seasons, the annual migrations of animals and the growth cycles of plants. They usually travelled back and forth across the same home territory, an area of between several dozen and many hundreds of square miles.
Occasionally, bands wandered outside their turf and explored new lands, whether due to natural calamities, violent conflicts, demographic pressures or the initiative of a charismatic leader. These wanderings were the engine of human worldwide expansion. If a forager band split once every forty years and its splinter group migrated to a new territory sixty miles to the east, the distance from East Africa to China would have been covered in about 10,000 years.
In some exceptional cases, when food sources were particularly rich, bands settled down in seasonal and even permanent camps. Techniques for drying, smoking and freezing food also made it possible to stay put for longer periods. Most importantly, alongside seas and rivers rich in seafood and waterfowl, humans set up permanent fishing villages – the first permanent settlements in history, long predating the Agricultural Revolution. Fishing villages might have appeared on the coasts of Indonesian islands as early as 45,000 years ago. These may have been the base from which Homo sapiens launched its first transoceanic enterprise: the invasion of Australia.
In most habitats, Sapiens bands fed themselves in an elastic and opportunistic fashion. They scrounged for termites, picked berries, dug for roots, stalked rabbits and hunted bison and mammoth. Notwithstanding the popular image of ‘man the hunter’, gathering was Sapiens’ main activity, and it provided most of their calories, as well as raw materials such as flint, wood and bamboo.
Sapiens did not forage only for food and materials. They foraged for knowledge as well. To survive, they needed a detailed mental map of their territory. To maximise the efficiency of their daily search for food, they required information about the growth patterns of each plant and the habits of each animal. They needed to know which foods were nourishing, which made you sick, and how to use others as cures. They needed to know the progress of the seasons and what warning signs preceded a thunderstorm or a dry spell. They studied every stream, every walnut tree, every bear cave, and every flint-stone deposit in their vicinity. Each individual had to understand how to make a stone knife, how to mend a torn cloak, how to lay a rabbit trap, and how to face avalanches, snakebites or hungry lions. Mastery of each of these many skills required years of apprenticeship and practice. The average ancient forager could turn a flint stone into a spear point within minutes. When we try to imitate this feat, we usually fail miserably. Most of us lack expert knowledge of the flaking properties of flint and basalt and the fine motor skills needed to work them precisely.
In other words, the average forager had wider, deeper and more varied knowledge of her immediate surroundings than most of her modern descendants. Today, most people in industrial societies don’t need to know much about the natural world in order to survive. What do you really need to know in order to get by as a computer engineer, an insurance agent, a history teacher or a factory worker? You need to know a lot about your own tiny field of expertise, but for the vast majority of life’s necessities you rely blindly on the help of other experts, whose own knowledge is also limited to a tiny field of expertise. The human collective knows far more today than did the ancient bands. But at the individual level, ancient foragers were the most knowledgeable and skilful people in history.
There is some evidence that the size of the average Sapiens brain has actually decreased since the age of foraging.5 Survival in that era required superb mental abilities from everyone. When agriculture and industry came along, people could increasingly rely on the skills of others for survival, and new ‘niches for imbeciles’ were opened up. You could survive and pass your unremarkable genes to the next generation by working as a water carrier or an assembly-line worker.
Foragers mastered not only the surrounding world of animals, plants and objects, but also the internal world of their own bodies and senses. They listened to the slightest movement in the grass to learn whether a snake might be lurking there. They carefully observed the foliage of trees in order to discover fruits, beehives and bird nests. They moved with a minimum of effort and noise, and knew how to sit, walk and run in the most agile and efficient manner. Varied and constant use of their bodies made them as fit as marathon runners. They had physical dexterity that people today are unable to achieve even after years of practising yoga or t’ai chi.
The hunter-gatherer way of life differed significantly from region to region and from season to season, but on the whole foragers seem to have enjoyed a more comfortable and rewarding lifestyle than most of the peasants, shepherds, labourers and office clerks who followed in their footsteps.
While people in today’s affluent societies work an average of forty to forty-five hours a week, and people in the developing world work sixty and even eighty hours a week, hunter-gatherers living today in the most inhospitable of habitats – such as the Kalahari Desert – work on average for just thirty-five to forty-five hours a week. They hunt only one day out of three, and gathering takes up just three to six hours daily. In normal times, this is enough to feed the band. It may well be that ancient hunter-gatherers living in zones more fertile than the Kalahari spent even less time obtaining food and raw materials. On top of that, foragers enjoyed a lighter load of household chores. They had no dishes to wash, no carpets to vacuum, no floors to polish, no nappies to change and no bills to pay.
The forager economy provided most people with more interesting lives than agriculture or industry do. Today, a Chinese factory hand leaves home around seven in the morning, makes her way through polluted streets to a sweatshop, and there operates the same machine, in the same way, day in, day out, for ten long and mind-numbing hours, returning home around seven in the evening in order to wash dishes and do the laundry. Thirty thousand years ago, a Chinese forager might leave camp with her companions at, say, eight in the morning. They’d roam the nearby forests and meadows, gathering mushrooms, digging up edible roots, catching frogs and occasionally running away from tigers. By early afternoon, they’d be back at the camp to make lunch. That would leave them plenty of time to gossip, tell stories, play with the children and just hang out. Of course the tigers sometimes caught them, or a snake bit them, but on the other hand they didn’t have to deal with automobile accidents and industrial pollution.
In most places and at most times, foraging provided ideal nutrition. That is hardly surprising – this had been the human diet for hundreds of thousands of years, and the human body was well adapted to it. Evidence from fossilised skeletons indicates that ancient foragers were less likely to suffer from starvation or malnutrition, and were generally taller and healthier than their peasant descendants. Average life expectancy was apparently just thirty to forty years, but this was due largely to the high incidence of child mortality. Children who made it through the perilous first years had a good chance of reaching the age of sixty, and some even made it to their eighties. Among modern foragers, forty-five-year-old women can expect to live another twenty years, and about 5–8 per cent of the population is over sixty.6
The foragers’ secret of success, which protected them from starvation and malnutrition, was their varied diet. Farmers tend to eat a very limited and unbalanced diet. Especially in premodern times, most of the calories feeding an agricultural population came from a single crop – such as wheat, potatoes or rice – that lacks some of the vitamins, minerals and other nutritional materials humans need. The typical peasant in traditional China ate rice for breakfast, rice for lunch, and rice for dinner. If she were lucky, she could expect to eat the same on the following day. By contrast, ancient foragers regularly ate dozens of different foodstuffs. The peasant’s ancient ancestor, the forager, may have eaten berries and mushrooms for breakfast; fruits, snails and turtle for lunch; and rabbit steak with wild onions for dinner. Tomorrow’s menu might have been completely different. This variety ensured that the ancient foragers received all the necessary nutrients.
Furthermore, by not being dependent on any single kind of food, they were less liable to suffer when one particular food source failed. Agricultural societies are ravaged by famine when drought, fire or earthquake devastates the annual rice or potato crop. Forager societies were hardly immune to natural disasters, and suffered from periods of want and hunger, but they were usually able to deal with such calamities more easily. If they lost some of their staple foodstuffs, they could gather or hunt other species, or move to a less affected area.
Ancient foragers also suffered less from infectious diseases. Most of the infectious diseases that have plagued agricultural and industrial societies (such as smallpox, measles and tuberculosis) originated in domesticated animals and were transferred to humans only after the Agricultural Revolution. Ancient foragers, who had domesticated only dogs, were free of these scourges. Moreover, most people in agricultural and industrial societies lived in dense, unhygienic permanent settlements – ideal hotbeds for disease. Foragers roamed the land in small bands that could not sustain epidemics.
The wholesome and varied diet, the relatively short working week, and the rarity of infectious diseases have led many experts to define pre-agricultural forager societies as ‘the original affluent societies’. It would be a mistake, however, to idealise the lives of these ancients. Though they lived better lives than most people in agricultural and industrial societies, their world could still be harsh and unforgiving. Periods of want and hardship were not uncommon, child mortality was high, and an accident which would be minor today could easily become a death sentence. Most people probably enjoyed the close intimacy of the roaming band, but those unfortunates who incurred the hostility or mockery of their fellow band members probably suffered terribly. Modern foragers occasionally abandon and even kill old or disabled people who cannot keep up with the band. Unwanted babies and children may be slain, and there are even cases of religiously inspired human sacrifice.
The Aché people, hunter-gatherers who lived in the jungles of Paraguay until the 1960s, offer a glimpse into the darker side of foraging. When a valued band member died, the Aché customarily killed a little girl and buried the two together. Anthropologists who interviewed the Aché recorded a case in which a band abandoned a middle-aged man who fell sick and was unable to keep up with the others. He was left under a tree. Vultures perched above him, expecting a hearty meal. But the man recuperated, and, walking briskly, he managed to rejoin the band. His body was covered with the birds’ faeces, so he was henceforth nicknamed ‘Vulture Droppings’.
When an old Aché woman became a burden to the rest of the band, one of the younger men would sneak behind her and kill her with an axe-blow to the head. An Aché man told the inquisitive anthropologists stories of his prime years in the jungle. ‘I customarily killed old women. I used to kill my aunts . . . The women were afraid of me . . . Now, here with the whites, I have become weak.’ Babies born without hair, who were considered underdeveloped, were killed immediately. One woman recalled that her first baby girl was killed because the men in the band did not want another girl. On another occasion a man killed a small boy because he was ‘in a bad mood and the child was crying’. Another child was buried alive because ‘it was funny-looking and the other children laughed at it’.7
We should be careful, though, not to judge the Aché too quickly. Anthropologists who lived with them for years report that violence between adults was very rare. Both women and men were free to change partners at will. They smiled and laughed constantly, had no leadership hierarchy, and generally shunned domineering people. They were extremely generous with their few possessions, and were not obsessed with success or wealth. The things they valued most in life were good social interactions and high-quality friendships.8 They viewed the killing of children, sick people and the elderly as many people today view abortion and euthanasia. It should also be noted that the Aché were hunted and killed without mercy by Paraguayan farmers. The need to evade their enemies probably caused the Aché to adopt an exceptionally harsh attitude towards anyone who might become a liability to the band.
The truth is that Aché society, like every human society, was very complex. We should beware of demonising or idealising it on the basis of a superficial acquaintance. The Aché were neither angels nor fiends – they were humans. So, too, were the ancient hunter-gatherers.
What can we say about the spiritual and mental life of the ancient hunter-gatherers? The basics of the forager economy can be reconstructed with some confidence based on quantifiable and objective factors. For example, we can calculate how many calories per day a person needed in order to survive, how many calories were obtained from a pound of walnuts, and how many walnuts could be gathered from a square mile of forest. With this data, we can make an educated guess about the relative importance of walnuts in their diet.
But did they consider walnuts a delicacy or a humdrum staple? Did they believe that walnut trees were inhabited by spirits? Did they find walnut leaves pretty? If a forager boy wanted to take a forager girl to a romantic spot, did the shade of a walnut tree suffice? The world of thought, belief and feeling is by definition far more difficult to decipher.
Most scholars agree that animistic beliefs were common among ancient foragers. Animism (from ‘anima’, ‘soul’ or ‘spirit’ in Latin) is the belief that almost every place, every animal, every plant and every natural phenomenon has awareness and feelings, and can communicate directly with humans. Thus, animists may believe that the big rock at the top of the hill has desires and needs. The rock might be angry about something that people did and rejoice over some other action. The rock might admonish people or ask for favours. Humans, for their part, can address the rock, to mollify or threaten it. Not only the rock, but also the oak tree at the bottom of the hill is an animated being, and so is the stream flowing below the hill, the spring in the forest clearing, the bushes growing around it, the path to the clearing, and the field mice, wolves and crows that drink there. In the animist world, objects and living things are not the only animated beings. There are also immaterial entities – the spirits of the dead, and friendly and malevolent beings, the kind that we today call demons, fairies and angels.
Animists believe that there is no barrier between humans and other beings. They can all communicate directly through speech, song, dance and ceremony. A hunter may address a herd of deer and ask that one of them sacrifice itself. If the hunt succeeds, the hunter may ask the dead animal to forgive him. When someone falls sick, a shaman can contact the spirit that caused the sickness and try to pacify it or scare it away. If need be, the shaman may ask for help from other spirits. What characterises all these acts of communication is that the entities being addressed are local beings. They are not universal gods, but rather a particular deer, a particular tree, a particular stream, a particular ghost.
Just as there is no barrier between humans and other beings, neither is there a strict hierarchy. Non-human entities do not exist merely to provide for the needs of man. Nor are they all-powerful gods who run the world as they wish. The world does not revolve around humans or around any other particular group of beings.
Animism is not a specific religion. It is a generic name for thousands of very different religions, cults and beliefs. What makes all of them ‘animist’ is this common approach to the world and to man’s place in it. Saying that ancient foragers were probably animists is like saying that premodern agriculturists were mostly theists. Theism (from ‘theos’, ‘god’ in Greek) is the view that the universal order is based on a hierarchical relationship between humans and a small group of ethereal entities called gods. It is certainly true to say that premodern agriculturists tended to be theists, but it does not teach us much about the particulars. The generic rubric ‘theists’ covers Jewish rabbis from eighteenth-century Poland, witch-burning Puritans from seventeenth-century Massachusetts, Aztec priests from fifteenth-century Mexico, Sufi mystics from twelfth-century Iran, tenth-century Viking warriors, second-century Roman legionnaires, and first-century Chinese bureaucrats. Each of these viewed the others’ beliefs and practices as weird and heretical. The differences between the beliefs and practices of groups of ‘animistic’ foragers were probably just as big. Their religious experience may have been turbulent and filled with controversies, reforms and revolutions.
But these cautious generalisations are about as far as we can go. Any attempt to describe the specifics of archaic spirituality is highly speculative, as there is next to no evidence to go by and the little evidence we have – a handful of artefacts and cave paintings – can be interpreted in myriad ways. The theories of scholars who claim to know what the foragers felt shed much more light on the prejudices of their authors than on Stone Age religions.
Instead of erecting mountains of theory over a molehill of tomb relics, cave paintings and bone statuettes, it is better to be frank and admit that we have only the haziest notions about the religions of ancient foragers. We assume that they were animists, but that’s not very informative. We don’t know which spirits they prayed to, which festivals they celebrated, or which taboos they observed. Most importantly, we don’t know what stories they told. It’s one of the biggest holes in our understanding of human history.
The sociopolitical world of the foragers is another area about which we know next to nothing. As explained above, scholars cannot even agree on the basics, such as the existence of private property, nuclear families and monogamous relationships. It’s likely that different bands had different structures. Some may have been as hierarchical, tense and violent as the nastiest chimpanzee group, while others were as laid-back, peaceful and lascivious as a bunch of bonobos.
8. A painting from Lascaux Cave, c.15,000–20,000 years ago. What exactly do we see, and what is the painting’s meaning? Some argue that we see a man with the head of a bird and an erect penis, being killed by a bison. Beneath the man is another bird which might symbolise the soul, released from the body at the moment of death. If so, the picture depicts not a prosaic hunting accident, but rather the passage from this world to the next. But we have no way of knowing whether any of these speculations are true. It’s a Rorschach test that reveals much about the preconceptions of modern scholars, and little about the beliefs of ancient foragers.
{© Visual/Corbis.}
In Sungir, Russia, archaeologists discovered in 1955 a 30,000-year-old burial site belonging to a mammoth-hunting culture. In one grave they found the skeleton of a fifty-year-old man, covered with strings of mammoth ivory beads, containing about 3,000 beads in total. On the dead man’s head was a hat decorated with fox teeth, and on his wrists twenty-five ivory bracelets. Other graves from the same site contained far fewer goods. Scholars deduced that the Sungir mammoth-hunters lived in a hierarchical society, and that the dead man was perhaps the leader of a band or of an entire tribe comprising several bands. It is unlikely that a few dozen members of a single band could have produced so many grave goods by themselves.
9. Hunter-gatherers made these handprints about 9,000 years ago in the ‘Hands Cave’, in Argentina. It looks as if these long-dead hands are reaching towards us from within the rock. This is one of the most moving relics of the ancient forager world – but nobody knows what it means.
{© Visual/Corbis.}
Archaeologists then discovered an even more interesting tomb. It contained two skeletons, buried head to head. One belonged to a boy aged about twelve or thirteen, and the other to a girl of about nine or ten. The boy was covered with 5,000 ivory beads. He wore a fox-tooth hat and a belt with 250 fox teeth (at least sixty foxes had to have their teeth pulled to get that many). The girl was adorned with 5,250 ivory beads. Both children were surrounded by statuettes and various ivory objects. A skilled craftsman (or craftswoman) probably needed about forty-five minutes to prepare a single ivory bead. In other words, fashioning the 10,000 ivory beads that covered the two children, not to mention the other objects, required some 7,500 hours of delicate work, well over three years of labour by an experienced artisan!
It is highly unlikely that at such a young age the Sungir children had proved themselves as leaders or mammoth-hunters. Only cultural beliefs can explain why they received such an extravagant burial. One theory is that they owed their rank to their parents. Perhaps they were the children of the leader, in a culture that believed in either family charisma or strict rules of succession. According to a second theory, the children had been identified at birth as the incarnations of some long-dead spirits. A third theory argues that the children’s burial reflects the way they died rather than their status in life. They were ritually sacrificed – perhaps as part of the burial rites of the leader – and then entombed with pomp and circumstance.9
Whatever the correct answer, the Sungir children are among the best pieces of evidence that 30,000 years ago Sapiens could invent sociopolitical codes that went far beyond the dictates of our DNA and the behaviour patterns of other human and animal species.
Finally, there’s the thorny question of the role of war in forager societies. Some scholars imagine ancient hunter-gatherer societies as peaceful paradises, and argue that war and violence began only with the Agricultural Revolution, when people started to accumulate private property. Other scholars maintain that the world of the ancient foragers was exceptionally cruel and violent. Both schools of thought are castles in the air, connected to the ground by the thin strings of meagre archaeological remains and anthropological observations of present-day foragers.
The anthropological evidence is intriguing but very problematic. Foragers today live mainly in isolated and inhospitable areas such as the Arctic or the Kalahari, where population density is very low and opportunities to fight other people are limited. Moreover, in recent generations, foragers have been increasingly subject to the authority of modern states, which prevent the eruption of large-scale conflicts. European scholars have had only two opportunities to observe large and relatively dense populations of independent foragers: in north-western North America in the nineteenth century, and in northern Australia during the nineteenth and early twentieth centuries. Both Amerindian and Aboriginal Australian cultures witnessed frequent armed conflicts. It is debatable, however, whether this represents a ‘timeless’ condition or the impact of European imperialism.
The archaeological findings are both scarce and opaque. What telltale clues might remain of any war that took place tens of thousands of years ago? There were no fortifications and walls back then, no artillery shells or even swords and shields. An ancient spear point might have been used in war, but it could have been used in a hunt as well. Fossilised human bones are no less hard to interpret. A fracture might indicate a war wound or an accident. Nor is the absence of fractures and cuts on an ancient skeleton conclusive proof that the person to whom the skeleton belonged did not die a violent death. Death can be caused by trauma to soft tissues that leaves no marks on bone. Even more importantly, during pre-industrial warfare more than 90 per cent of war dead were killed by starvation, cold and disease rather than by weapons. Imagine that 30,000 years ago one tribe defeated its neighbour and expelled it from coveted foraging grounds. In the decisive battle, ten members of the defeated tribe were killed. In the following year, another hundred members of the losing tribe died from starvation, cold and disease. Archaeologists who come across these 110 skeletons may too easily conclude that most fell victim to some natural disaster. How would we be able to tell that they were all victims of a merciless war?
Duly warned, we can now turn to the archaeological findings. In Portugal, a survey was made of 400 skeletons from the period immediately predating the Agricultural Revolution. Only two skeletons showed clear marks of violence. A similar survey of 400 skeletons from the same period in Israel discovered a single crack in a single skull that could be attributed to human violence. A third survey of 400 skeletons from various pre-agricultural sites in the Danube Valley found evidence of violence on eighteen skeletons. Eighteen out of 400 may not sound like a lot, but it’s actually a very high percentage. If all eighteen indeed died violently, it means that about 4.5 per cent of deaths in the ancient Danube Valley were caused by human violence. Today, the global average is only 1.5 per cent, taking war and crime together. During the twentieth century, only 5 per cent of human deaths resulted from human violence – and this in a century that saw the bloodiest wars and most massive genocides in history. If this revelation is typical, the ancient Danube Valley was as violent as the twentieth century.*
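The percentage is simply the ratio of violent deaths to the sample, assuming, as the survey does, that all eighteen died at human hands:

\[
\frac{18}{400} = 0.045 = 4.5 \ \text{per cent},
\]

exactly three times today's global average of 1.5 per cent, and close to the twentieth century's 5 per cent.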
The depressing findings from the Danube Valley are supported by a string of equally depressing findings from other areas. At Jabl Sahaba in Sudan, a 12,000-year-old cemetery containing fifty-nine skeletons was discovered. Arrowheads and spear points were found embedded in or lying near the bones of twenty-four skeletons, 40 per cent of the find. The skeleton of one woman revealed twelve injuries. In Ofnet Cave in Bavaria, archaeologists discovered the remains of thirty-eight foragers, mainly women and children, who had been thrown into two burial pits. Half the skeletons, including those of children and babies, bore clear signs of damage by human weapons such as clubs and knives. The few skeletons belonging to mature males bore the worst marks of violence. In all probability, an entire forager band was massacred at Ofnet.
Which better represents the world of the ancient foragers: the peaceful skeletons from Israel and Portugal, or the abattoirs of Jabl Sahaba and Ofnet? The answer is neither. Just as foragers exhibited a wide array of religions and social structures, so, too, did they probably demonstrate a variety of violence rates. While some areas and some periods of time may have enjoyed peace and tranquillity, others were riven by ferocious conflicts.10
If the larger picture of ancient forager life is hard to reconstruct, particular events are largely irretrievable. When a Sapiens band first entered a valley inhabited by Neanderthals, the following years might have witnessed a breathtaking historical drama. Unfortunately, nothing would have survived from such an encounter except, at best, a few fossilised bones and a handful of stone tools that remain mute under the most intense scholarly inquisitions. We may extract from them information about human anatomy, human technology, human diet, and perhaps even human social structure. But they reveal nothing about the political alliance forged between neighbouring Sapiens bands, about the spirits of the dead that blessed this alliance, or about the ivory beads secretly given to the local witch doctor in order to secure the blessing of the spirits.
This curtain of silence shrouds tens of thousands of years of history. These long millennia may well have witnessed wars and revolutions, ecstatic religious movements, profound philosophical theories, incomparable artistic masterpieces. The foragers may have had their all-conquering Napoleons, who ruled empires half the size of Luxembourg; gifted Beethovens who lacked symphony orchestras but brought people to tears with the sound of their bamboo flutes; and charismatic prophets who revealed the words of a local oak tree rather than those of a universal creator god. But these are all mere guesses. The curtain of silence is so thick that we cannot even be sure such things occurred – let alone describe them in detail.
Scholars tend to ask only those questions that they can reasonably expect to answer. Without the discovery of as yet unavailable research tools, we will probably never know what the ancient foragers believed or what political dramas they experienced. Yet it is vital to ask questions for which no answers are available, otherwise we might be tempted to dismiss 60,000 of 70,000 years of human history with the excuse that ‘the people who lived back then did nothing of importance’.
The truth is that they did a lot of important things. In particular, they shaped the world around us to a much larger degree than most people realise. Trekkers visiting the Siberian tundra, the deserts of central Australia and the Amazonian rainforest believe that they have entered pristine landscapes, virtually untouched by human hands. But that’s an illusion. The foragers were there before us and they brought about dramatic changes even in the densest jungles and the most desolate wildernesses. The next chapter explains how the foragers completely reshaped the ecology of our planet long before the first agricultural village was built. The wandering bands of storytelling Sapiens were the most important and most destructive force the animal kingdom had ever produced.
PRIOR TO THE COGNITIVE REVOLUTION, humans of all species lived exclusively on the Afro-Asian landmass. True, they had settled a few islands by swimming short stretches of water or crossing them on improvised rafts. Flores, for example, was colonised as far back as 850,000 years ago. Yet they were unable to venture into the open sea, and none reached America, Australia, or remote islands such as Madagascar, New Zealand and Hawaii.
The sea barrier prevented not just humans but also many other Afro-Asian animals and plants from reaching this ‘Outer World’. As a result, the organisms of distant lands like Australia and Madagascar evolved in isolation for millions upon millions of years, taking on shapes and natures very different from those of their distant Afro-Asian relatives. Planet Earth was separated into several distinct ecosystems, each made up of a unique assembly of animals and plants. Homo sapiens was about to put an end to this biological exuberance.
Following the Cognitive Revolution, Sapiens acquired the technology, the organisational skills, and perhaps even the vision necessary to break out of Afro-Asia and settle the Outer World. Their first achievement was the colonisation of Australia some 45,000 years ago. Experts are hard-pressed to explain this feat. In order to reach Australia, humans had to cross a number of sea channels, some more than 60 miles wide, and upon arrival they had to adapt nearly overnight to a completely new ecosystem.
The most reasonable theory suggests that, about 45,000 years ago, the Sapiens living in the Indonesian archipelago (a group of islands separated from Asia and from each other by only narrow straits) developed the first seafaring societies. They learned how to build and manoeuvre ocean-going vessels and became long-distance fishermen, traders and explorers. This would have brought about an unprecedented transformation in human capabilities and lifestyles. Every other mammal that went to sea – seals, sea cows, dolphins – had to evolve for aeons to develop specialised organs and a hydrodynamic body. The Sapiens in Indonesia, descendants of apes who lived on the African savannah, became Pacific seafarers without growing flippers and without having to wait for their noses to migrate to the top of their heads as whales did. Instead, they built boats and learned how to steer them. And these skills enabled them to reach and settle Australia.
True, archaeologists have yet to unearth rafts, oars or fishing villages that date back as far as 45,000 years ago (they would be difficult to discover, because rising sea levels have buried the ancient Indonesian shoreline under 300 feet of ocean). Nevertheless, there is strong circumstantial evidence to support this theory, especially the fact that in the thousands of years following the settlement of Australia, Sapiens colonised a large number of small and isolated islands to its north. Some, such as Buka and Manus, were separated from the closest land by 120 miles of open water. It’s hard to believe that anyone could have reached and colonised Manus without sophisticated vessels and sailing skills. As mentioned earlier, there is also firm evidence for regular sea trade between some of these islands, such as New Ireland and New Britain.1
The journey of the first humans to Australia is one of the most important events in history, at least as important as Columbus’ journey to America or the Apollo 11 expedition to the moon. It was the first time any human had managed to leave the Afro-Asian ecological system – indeed, the first time any large terrestrial mammal had managed to cross from Afro-Asia to Australia. Of even greater importance was what the human pioneers did in this new world. The moment the first hunter-gatherer set foot on an Australian beach was the moment that Homo sapiens climbed to the top rung in the food chain on a particular landmass and thereafter became the deadliest species in the annals of planet Earth.
Up until then humans had displayed some innovative adaptations and behaviours, but their effect on their environment had been negligible. They had demonstrated remarkable success in moving into and adjusting to various habitats, but they did so without drastically changing those habitats. The settlers of Australia, or more accurately, its conquerors, didn’t just adapt, they transformed the Australian ecosystem beyond recognition.
The first human footprint on a sandy Australian beach was immediately washed away by the waves. Yet when the invaders advanced inland, they left behind a different footprint, one that would never be expunged. As they pushed on, they encountered a strange universe of unknown creatures that included a 450-pound, six-foot kangaroo, and a marsupial lion, as massive as a modern tiger, that was the continent’s largest predator. Koalas far too big to be cuddly and cute rustled in the trees and flightless birds twice the size of ostriches sprinted on the plains. Dragon-like lizards and snakes seven feet long slithered through the undergrowth. The giant diprotodon, a two-and-a-half-ton wombat, roamed the forests. Except for the birds and reptiles, all these animals were marsupials – like kangaroos, they gave birth to tiny, helpless, fetus-like young which they then nurtured with milk in abdominal pouches. Marsupial mammals were almost unknown in Africa and Asia, but in Australia they reigned supreme.
Within a few thousand years, virtually all of these giants vanished. Of the twenty-four Australian animal species weighing 100 pounds or more, twenty-three became extinct.2 A large number of smaller species also disappeared. Food chains throughout the entire Australian ecosystem were broken and rearranged. It was the most important transformation of the Australian ecosystem for millions of years. Was it all the fault of Homo sapiens?
Some scholars try to exonerate our species, placing the blame on the vagaries of the climate (the usual scapegoat in such cases). Yet it is hard to believe that Homo sapiens was completely innocent. There are three pieces of evidence that weaken the climate alibi, and implicate our ancestors in the extinction of the Australian megafauna.
Firstly, even though Australia’s climate changed some 45,000 years ago, it wasn’t a very remarkable upheaval. It’s hard to see how the new weather patterns alone could have caused such a massive extinction. It’s common today to explain anything and everything as the result of climate change, but the truth is that earth’s climate never rests. It is in constant flux. Every event in history occurred against the background of some climate change.
In particular, our planet has experienced numerous cycles of cooling and warming. During the last million years, there has been an ice age on average every 100,000 years. The last one ran from about 75,000 to 15,000 years ago. Not unusually severe for an ice age, it had twin peaks, the first about 70,000 years ago and the second about 20,000 years ago. The giant diprotodon appeared in Australia more than 1.5 million years ago and successfully weathered at least ten previous ice ages. It also survived the first peak of the last ice age, around 70,000 years ago. Why, then, did it disappear 45,000 years ago? Of course, if diprotodons had been the only large animal to disappear at this time, it might have been just a fluke. But more than 90 per cent of Australia’s megafauna disappeared along with the diprotodon. The evidence is circumstantial, but it’s hard to imagine that Sapiens, just by coincidence, arrived in Australia at the precise point that all these animals were dropping dead of the chills.3
Secondly, when climate change causes mass extinctions, sea creatures are usually hit as hard as land dwellers. Yet there is no evidence of any significant disappearance of oceanic fauna 45,000 years ago. Human involvement can easily explain why the wave of extinction obliterated the terrestrial megafauna of Australia while sparing that of the nearby oceans. Despite its burgeoning navigational abilities, Homo sapiens was still overwhelmingly a terrestrial menace.
Thirdly, mass extinctions akin to the archetypal Australian decimation occurred again and again in the ensuing millennia – whenever people settled another part of the Outer World. In these cases Sapiens guilt is irrefutable. For example, the megafauna of New Zealand – which had weathered the alleged ‘climate change’ of c.45,000 years ago without a scratch – suffered devastating blows immediately after the first humans set foot on the islands. The Maoris, New Zealand’s first Sapiens colonisers, reached the islands about 800 years ago. Within a couple of centuries, the majority of the local megafauna was extinct, along with 60 per cent of all bird species.
A similar fate befell the mammoth population of Wrangel Island in the Arctic Ocean (125 miles north of the Siberian coast). Mammoths had flourished for millions of years over most of the northern hemisphere, but as Homo sapiens spread – first over Eurasia and then over North America – the mammoths retreated. By 10,000 years ago there was not a single mammoth to be found in the world, except on a few remote Arctic islands, most conspicuously Wrangel. The mammoths of Wrangel continued to prosper for a few more millennia, then suddenly disappeared about 4,000 years ago, just when the first humans reached the island.
Were the Australian extinction an isolated event, we could grant humans the benefit of the doubt. But the historical record makes Homo sapiens look like an ecological serial killer.
All the settlers of Australia had at their disposal was Stone Age technology. How could they cause an ecological disaster? There are three explanations that mesh quite nicely.
Large animals – the primary victims of the Australian extinction – breed slowly. Pregnancy is long, offspring per pregnancy are few, and there are long breaks between pregnancies. Consequently, if humans cut down even one diprotodon every few months, it would be enough to cause diprotodon deaths to outnumber births. Within a few thousand years the last, lonesome diprotodon would pass away, and with her the entire species.4
In fact, for all their size, diprotodons and Australia’s other giants probably wouldn’t have been that hard to hunt because they would have been taken totally by surprise by their two-legged assailants. Various human species had been prowling and evolving in Afro-Asia for 2 million years. They slowly honed their hunting skills, and began going after large animals around 400,000 years ago. The big beasts of Africa and Asia learned to avoid humans, so when the new mega-predator – Homo sapiens – appeared on the Afro-Asian scene, the large animals already knew to keep their distance from creatures that looked like it. In contrast, the Australian giants had no time to learn to run away. Humans don’t come across as particularly dangerous. They don’t have long, sharp teeth or muscular, lithe bodies. So when a diprotodon, the largest marsupial ever to walk the earth, set eyes for the first time on this frail-looking ape, he probably gave it one glance and then went back to chewing leaves. These animals had to evolve a fear of humankind, but before they could do so they were gone.
The second explanation is that by the time Sapiens reached Australia, they had already mastered fire agriculture. Faced with an alien and threatening environment, it seems that they deliberately burned vast areas of impassable thickets and dense forests to create open grasslands, which attracted more easily hunted game, and were better suited to their needs. They thereby completely changed the ecology of large parts of Australia within a few short millennia.
One body of evidence supporting this view is the fossil plant record. Eucalyptus trees were rare in Australia 45,000 years ago. But the arrival of Homo sapiens inaugurated a golden age for the species. Since eucalyptuses are particularly resistant to fire, they spread far and wide while other trees and shrubs disappeared.
These changes in vegetation influenced the animals that ate the plants and the carnivores that ate the vegetarians. Koalas, which subsist exclusively on eucalyptus leaves, happily munched their way into new territories. Most other animals suffered greatly. Many Australian food chains collapsed, driving the weakest links into extinction.5
A third explanation agrees that hunting and fire agriculture played a significant role in the extinction, but emphasises that we can’t completely ignore the role of climate. The climate changes that beset Australia about 45,000 years ago destabilised the ecosystem and made it particularly vulnerable. Under normal circumstances the system would probably have recuperated, as had happened many times previously. However, humans appeared on the stage at just this critical juncture and pushed the brittle ecosystem into the abyss. The combination of climate change and human hunting is particularly devastating for large animals, since it attacks them from different angles. It is hard to find a good survival strategy that will work simultaneously against multiple threats.
Without further evidence, there’s no way of deciding between the three scenarios. But there are certainly good reasons to believe that if Homo sapiens had never gone Down Under, Australia would still be home to marsupial lions, diprotodons and giant kangaroos.
The extinction of the Australian megafauna was probably the first significant mark Homo sapiens left on our planet. It was followed by an even larger ecological disaster, this time in America. Homo sapiens was the first and only human species to reach the western hemisphere landmass, arriving about 16,000 years ago, that is in or around 14,000 BC. The first Americans arrived on foot, which they could do because, at the time, sea levels were low enough that a land bridge connected north-eastern Siberia with north-western Alaska. Not that it was easy – the journey was an arduous one, perhaps harder than the sea passage to Australia. To make the crossing, Sapiens first had to learn how to withstand the extreme Arctic conditions of northern Siberia, an area on which the sun never shines in winter, and where temperatures can drop to minus sixty degrees Fahrenheit.
No previous human species had managed to penetrate places like northern Siberia. Even the cold-adapted Neanderthals restricted themselves to relatively warmer regions further south. But Homo sapiens, whose body was adapted to living in the African savannah rather than in the lands of snow and ice, devised ingenious solutions. When roaming bands of Sapiens foragers migrated into colder climates, they learned to make snowshoes and effective thermal clothing composed of layers of furs and skins, sewn together tightly with the help of needles. They developed new weapons and sophisticated hunting techniques that enabled them to track and kill mammoths and the other big game of the far north. As their thermal clothing and hunting techniques improved, Sapiens dared to venture deeper and deeper into the frozen regions. And as they moved north, their clothes, hunting strategies and other survival skills continued to improve.
But why did they bother? Why banish oneself to Siberia by choice? Perhaps some bands were driven north by wars, demographic pressures or natural disasters. Others might have been lured northwards by more positive reasons, such as animal protein. The Arctic lands were full of large, juicy animals such as reindeer and mammoths. Every mammoth was a source of a vast quantity of meat (which, given the frosty temperatures, could even be frozen for later use), tasty fat, warm fur and valuable ivory. As the findings from Sungir testify, mammoth-hunters did not just survive in the frozen north – they thrived. As time passed, the bands spread far and wide, pursuing mammoths, mastodons, rhinoceroses and reindeer. Around 14,000 BC, the chase took some of them from north-eastern Siberia to Alaska. Of course, they didn’t know they were discovering a new world. For mammoth and man alike, Alaska was a mere extension of Siberia.
At first, glaciers blocked the way from Alaska to the rest of America, allowing no more than perhaps a few isolated pioneers to investigate the lands further south. However, around 12,000 BC global warming melted the ice and opened an easier passage. Making use of the new corridor, people moved south en masse, spreading over the entire continent. Though originally adapted to hunting large game in the Arctic, they soon adjusted to an amazing variety of climates and ecosystems. Descendants of the Siberians settled the thick forests of the eastern United States, the swamps of the Mississippi Delta, the deserts of Mexico and steaming jungles of Central America. Some made their homes in the river world of the Amazon basin, others struck roots in Andean mountain valleys or the open pampas of Argentina. And all this happened in a mere millennium or two! By 10,000 BC, humans already inhabited the southernmost point in America, the island of Tierra del Fuego at the continent’s tip. The human blitzkrieg across America testifies to the incomparable ingenuity and the unsurpassed adaptability of Homo sapiens. No other animal had ever moved into such a huge variety of radically different habitats so quickly, everywhere using virtually the same genes.6
The settling of America was hardly bloodless. It left behind a long trail of victims. American fauna 14,000 years ago was far richer than it is today. When the first Americans marched south from Alaska into the plains of Canada and the western United States, they encountered mammoths and mastodons, rodents the size of bears, herds of horses and camels, oversized lions and dozens of large species the likes of which are completely unknown today, among them fearsome sabre-tooth cats and giant ground sloths that weighed up to eight tons and reached a height of twenty feet. South America hosted an even more exotic menagerie of large mammals, reptiles and birds. The Americas were a great laboratory of evolutionary experimentation, a place where animals and plants unknown in Africa and Asia had evolved and thrived.
But no longer. Within 2,000 years of the Sapiens arrival, most of these unique species were gone. According to current estimates, within that short interval, North America lost thirty-four out of its forty-seven genera of large mammals. South America lost fifty out of sixty. The sabre-tooth cats, after flourishing for more than 30 million years, disappeared, and so did the giant ground sloths, the oversized lions, native American horses, native American camels, the giant rodents and the mammoths. Thousands of species of smaller mammals, reptiles, birds, and even insects and parasites also became extinct (when the mammoths died out, all species of mammoth ticks followed them to oblivion).
For decades, palaeontologists and zooarchaeologists – people who search for and study animal remains – have been combing the plains and mountains of the Americas in search of the fossilised bones of ancient camels and the petrified faeces of giant ground sloths. When they find what they seek, the treasures are carefully packed up and sent to laboratories, where every bone and every coprolite (the technical name for fossilised turds) is meticulously studied and dated. Time and again, these analyses yield the same results: the freshest dung balls and the most recent camel bones date to the period when humans flooded America, that is, between approximately 12,000 and 9000 BC. Only in one area have scientists discovered younger dung balls: on several Caribbean islands, in particular Cuba and Hispaniola, they found petrified ground-sloth scat dating to about 5000 BC. This is exactly the time when the first humans managed to cross the Caribbean Sea and settle these two large islands.
Again, some scholars try to exonerate Homo sapiens and blame climate change (which requires them to posit that, for some mysterious reason, the climate in the Caribbean islands remained static for 7,000 years while the rest of the western hemisphere warmed). But in America, the dung ball cannot be dodged. We are the culprits. There is no way around that truth. Even if climate change abetted us, the human contribution was decisive.7
If we combine the mass extinctions in Australia and America, and add the smaller-scale extinctions that took place as Homo sapiens spread over Afro-Asia – such as the extinction of all other human species – and the extinctions that occurred when ancient foragers settled remote islands such as Cuba, the inevitable conclusion is that the first wave of Sapiens colonisation was one of the biggest and swiftest ecological disasters to befall the animal kingdom. Hardest hit were the large furry creatures. At the time of the Cognitive Revolution, the planet was home to about 200 genera of large terrestrial mammals weighing over 100 pounds. At the time of the Agricultural Revolution, only about a hundred remained. Homo sapiens drove to extinction about half of the planet’s big beasts long before humans invented the wheel, writing, or iron tools.
This ecological tragedy was restaged in miniature countless times after the Agricultural Revolution. The archaeological record of island after island tells the same sad story. The tragedy opens with a scene showing a rich and varied population of large animals, without any trace of humans. In scene two, Sapiens appear, evidenced by a human bone, a spear point, or perhaps a potsherd. Scene three quickly follows, in which men and women occupy centre stage and most large animals, along with many smaller ones, are gone.
The large island of Madagascar, about 250 miles east of the African mainland, offers a famous example. Through millions of years of isolation, a unique collection of animals evolved there. These included the elephant bird, a flightless creature ten feet tall and weighing almost half a ton – the largest bird in the world – and the giant lemurs, the globe’s largest primates. The elephant birds and the giant lemurs, along with most of the other large animals of Madagascar, suddenly vanished about 1,500 years ago – precisely when the first humans set foot on the island.
10. Reconstructions of two giant ground sloths (Megatherium) and behind them two giant armadillos (Glyptodon). Now extinct, giant armadillos measured over ten feet in length and weighed up to two tons, whereas giant ground sloths reached heights of up to twenty feet, and weighed up to eight tons.
{Poster: Waterhouse Hawkins, c.1862 © The Trustees of the Natural History Museum.}
In the Pacific Ocean, the main wave of extinction began in about 1500 BC, when Polynesian farmers settled the Solomon Islands, Fiji and New Caledonia. They killed off, directly or indirectly, hundreds of species of birds, insects, snails and other local inhabitants. From there, the wave of extinction moved gradually to the east, the south and the north, into the heart of the Pacific Ocean, obliterating on its way the unique fauna of Samoa and Tonga (1200 BC); the Marquesas Islands (AD 1); Easter Island, the Cook Islands and Hawaii (AD 500); and finally New Zealand (AD 1200).
Similar ecological disasters occurred on almost every one of the thousands of islands that pepper the Atlantic Ocean, Indian Ocean, Arctic Ocean and Mediterranean Sea. Archaeologists have discovered on even the tiniest islands evidence of the existence of birds, insects and snails that lived there for countless generations, only to vanish when the first human farmers arrived. None but a few extremely remote islands escaped man’s notice until the modern age, and these islands kept their fauna intact. The Galapagos Islands, to give one famous example, remained uninhabited by humans until the nineteenth century, thus preserving their unique menagerie, including their giant tortoises, which, like the ancient diprotodons, show no fear of humans.
The First Wave Extinction, which accompanied the spread of the foragers, was followed by the Second Wave Extinction, which accompanied the spread of the farmers, and gives us an important perspective on the Third Wave Extinction, which industrial activity is causing today. Don’t believe tree-huggers who claim that our ancestors lived in harmony with nature. Long before the Industrial Revolution, Homo sapiens held the record among all organisms for driving the most plant and animal species to extinction. We have the dubious distinction of being the deadliest species in the annals of biology.
Perhaps if more people were aware of the First Wave and Second Wave extinctions, they’d be less nonchalant about the Third Wave they are part of. If we knew how many species we’ve already eradicated, we might be more motivated to protect those that still survive. This is especially relevant to the large animals of the oceans. Unlike their terrestrial counterparts, the large sea animals suffered relatively little from the Cognitive and Agricultural Revolutions. But many of them are on the brink of extinction now as a result of industrial pollution and human overuse of oceanic resources. If things continue at the present pace, it is likely that whales, sharks, tuna and dolphins will follow the diprotodons, ground sloths and mammoths to oblivion. Among all the world’s large creatures, the only survivors of the human flood will be humans themselves, and the farmyard animals that serve as galley slaves in Noah’s Ark.
11. A wall painting from an Egyptian grave, dated to about 3,500 years ago, depicting typical agricultural scenes.
{© Visual/Corbis.}
FOR 2.5 MILLION YEARS HUMANS FED themselves by gathering plants and hunting animals that lived and bred without their intervention. Homo erectus, Homo ergaster and the Neanderthals plucked wild figs and hunted wild sheep without deciding where fig trees would take root, in which meadow a herd of sheep should graze, or which billy goat would inseminate which nanny goat. Homo sapiens spread from East Africa to the Middle East, to Europe and Asia, and finally to Australia and America – but everywhere they went, Sapiens too continued to live by gathering wild plants and hunting wild animals. Why do anything else when your lifestyle feeds you amply and supports a rich world of social structures, religious beliefs and political dynamics?
All this changed about 10,000 years ago, when Sapiens began to devote almost all their time and effort to manipulating the lives of a few animal and plant species. From sunrise to sunset humans sowed seeds, watered plants, plucked weeds from the ground and led sheep to prime pastures. This work, they thought, would provide them with more fruit, grain and meat. It was a revolution in the way humans lived – the Agricultural Revolution.
The transition to agriculture began around 9500–8500 BC in the hill country of south-eastern Turkey, western Iran, and the Levant. It began slowly and in a restricted geographical area. Wheat and goats were domesticated by approximately 9000 BC; peas and lentils around 8000 BC; olive trees by 5000 BC; horses by 4000 BC; and grapevines around 3500 BC. Some animals and plants, such as camels and cashew nuts, were domesticated even later, but by 3500 BC the main wave of domestication was over. Even today, with all our advanced technologies, more than 90 per cent of the calories that feed humanity come from the handful of plants that our ancestors domesticated between 9500 and 3500 BC – wheat, rice, maize (called ‘corn’ in the US), potatoes, millet and barley. No noteworthy plant or animal has been domesticated in the last 2,000 years. If our minds are those of hunter-gatherers, our cuisine is that of ancient farmers.
Scholars once believed that agriculture spread from a single Middle Eastern point of origin to the four corners of the world. Today, scholars agree that agriculture sprang up in other parts of the world not by the action of Middle Eastern farmers exporting their revolution but entirely independently. People in Central America domesticated maize and beans without knowing anything about wheat and pea cultivation in the Middle East. South Americans learned how to raise potatoes and llamas, unaware of what was going on in either Mexico or the Levant. China’s first revolutionaries domesticated rice, millet and pigs. North America’s first gardeners were those who got tired of combing the undergrowth for edible gourds and decided to cultivate pumpkins. New Guineans tamed sugar cane and bananas, while the first West African farmers made African millet, African rice, sorghum and wheat conform to their needs. From these initial focal points, agriculture spread far and wide. By the first century AD the vast majority of people throughout most of the world were agriculturists.
Why did agricultural revolutions erupt in the Middle East, China and Central America but not in Australia, Alaska or South Africa? The reason is simple: most species of plants and animals can’t be domesticated. Sapiens could dig up delicious truffles and hunt down woolly mammoths, but domesticating either species was out of the question. The fungi were far too elusive, the giant beasts too ferocious. Of the thousands of species that our ancestors hunted and gathered, only a few were suitable candidates for farming and herding. Those few species lived in particular places, and those are the places where agricultural revolutions occurred.
Scholars once proclaimed that the agricultural revolution was a great leap forward for humanity. They told a tale of progress fuelled by human brain power. Evolution gradually produced ever more intelligent people. Eventually, people were so smart that they were able to decipher nature’s secrets, enabling them to tame sheep and cultivate wheat. As soon as this happened, they cheerfully abandoned the gruelling, dangerous, and often spartan life of hunter-gatherers, settling down to enjoy the pleasant, satiated life of farmers.
Map 2. Locations and dates of agricultural revolutions. The data is contentious, and the map is constantly being redrawn to incorporate the latest archaeological discoveries.1
{Maps by Neil Gower}
That tale is a fantasy. There is no evidence that people became more intelligent with time. Foragers knew the secrets of nature long before the Agricultural Revolution, since their survival depended on an intimate knowledge of the animals they hunted and the plants they gathered. Rather than heralding a new era of easy living, the Agricultural Revolution left farmers with lives generally more difficult and less satisfying than those of foragers. Hunter-gatherers spent their time in more stimulating and varied ways, and were less in danger of starvation and disease. The Agricultural Revolution certainly enlarged the sum total of food at the disposal of humankind, but the extra food did not translate into a better diet or more leisure. Rather, it translated into population explosions and pampered elites. The average farmer worked harder than the average forager, and got a worse diet in return. The Agricultural Revolution was history’s biggest fraud.2
Who was responsible? Neither kings, nor priests, nor merchants. The culprits were a handful of plant species, including wheat, rice and potatoes. These plants domesticated Homo sapiens, rather than vice versa.
Think for a moment about the Agricultural Revolution from the viewpoint of wheat. Ten thousand years ago wheat was just a wild grass, one of many, confined to a small range in the Middle East. Suddenly, within just a few short millennia, it was growing all over the world. According to the basic evolutionary criteria of survival and reproduction, wheat has become one of the most successful plants in the history of the earth. In areas such as the Great Plains of North America, where not a single wheat stalk grew 10,000 years ago, you can today walk for hundreds upon hundreds of miles without encountering any other plant. Worldwide, wheat covers about 870,000 square miles of the globe’s surface, almost ten times the size of Britain. How did this grass turn from insignificant to ubiquitous?
Wheat did it by manipulating Homo sapiens to its advantage. This ape had been living a fairly comfortable life hunting and gathering until about 10,000 years ago, but then began to invest more and more effort in cultivating wheat. Within a couple of millennia, humans in many parts of the world were doing little from dawn to dusk other than taking care of wheat plants. It wasn’t easy. Wheat demanded a lot of them. Wheat didn’t like rocks and pebbles, so Sapiens broke their backs clearing fields. Wheat didn’t like sharing its space, water and nutrients with other plants, so men and women laboured long days weeding under the scorching sun. Wheat got sick, so Sapiens had to keep a watch out for worms and blight. Wheat was attacked by rabbits and locust swarms, so the farmers built fences and stood guard over the fields. Wheat was thirsty, so humans dug irrigation canals or lugged heavy buckets from the well to water it. Sapiens even collected animal faeces to nourish the ground in which wheat grew.
The body of Homo sapiens had not evolved for such tasks. It was adapted to climbing apple trees and running after gazelles, not to clearing rocks and carrying water buckets. Human spines, knees, necks and arches paid the price. Studies of ancient skeletons indicate that the transition to agriculture brought about a plethora of ailments, such as slipped discs, arthritis and hernias. Moreover, the new agricultural tasks demanded so much time that people were forced to settle permanently next to their wheat fields. This completely changed their way of life. We did not domesticate wheat. It domesticated us. The word ‘domesticate’ comes from the Latin domus, which means ‘house’. Who’s the one living in a house? Not the wheat. It’s the Sapiens.
How did wheat convince Homo sapiens to exchange a rather good life for a more miserable existence? What did it offer in return? It did not offer a better diet. Remember, humans are omnivorous apes who thrive on a wide variety of foods. Grains made up only a small fraction of the human diet before the Agricultural Revolution. A diet based on cereals is poor in minerals and vitamins, hard to digest, and really bad for your teeth and gums.
Wheat did not give people economic security. The life of a peasant is less secure than that of a hunter-gatherer. Foragers relied on dozens of species to survive, and could therefore weather difficult years even without stocks of preserved food. If the availability of one species was reduced, they could gather and hunt more of other species. Farming societies have, until very recently, relied for the great bulk of their calorie intake on a small variety of domesticated plants. In many areas, they relied on just a single staple, such as wheat, potatoes or rice. If the rains failed or clouds of locusts arrived or if a fungus infected that staple species, peasants died by the thousands and millions.
Nor could wheat offer security against human violence. The early farmers were at least as violent as their forager ancestors, if not more so. Farmers had more possessions and needed land for planting. The loss of pasture land to raiding neighbours could mean the difference between subsistence and starvation, so there was much less room for compromise. When a foraging band was hard-pressed by a stronger rival, it could usually move on. It was difficult and dangerous, but it was feasible. When a strong enemy threatened an agricultural village, retreat meant giving up fields, houses and granaries. In many cases, this doomed the refugees to starvation. Farmers, therefore, tended to stay put and fight to the bitter end.
12. Tribal warfare in New Guinea between two farming communities (1960). Such scenes were probably widespread in the thousands of years following the Agricultural Revolution.
{Photo: Karl G. Heider © President and Fellows of Harvard College, Peabody Museum of Archaeology and Ethnology, PM# 2006.17.1.89.2 (digital file# 98770053).}
Many anthropological and archaeological studies indicate that in simple agricultural societies with no political frameworks beyond village and tribe, human violence was responsible for about 15 per cent of deaths, including 25 per cent of male deaths. In contemporary New Guinea, violence accounts for 30 per cent of male deaths in one agricultural tribal society, the Dani, and 35 per cent in another, the Enga. In Ecuador, perhaps 50 per cent of adult Waoranis meet a violent death at the hands of another human!3 In time, human violence was brought under control through the development of larger social frameworks – cities, kingdoms and states. But it took thousands of years to build such huge and effective political structures.
Village life certainly brought the first farmers some immediate benefits, such as better protection against wild animals, rain and cold. Yet for the average person, the disadvantages probably outweighed the advantages. This is hard for people in today’s prosperous societies to appreciate. Since we enjoy affluence and security, and since our affluence and security are built on foundations laid by the Agricultural Revolution, we assume that the Agricultural Revolution was a wonderful improvement. Yet it is wrong to judge thousands of years of history from the perspective of today. A much more representative viewpoint is that of a three-year-old girl dying from malnutrition in first-century China because her father’s crops have failed. Would she say ‘I am dying from malnutrition, but in 2,000 years, people will have plenty to eat and live in big air-conditioned houses, so my suffering is a worthwhile sacrifice’?
What then did wheat offer agriculturists, including that malnourished Chinese girl? It offered nothing for people as individuals. Yet it did bestow something on Homo sapiens as a species. Cultivating wheat provided much more food per unit of territory, and thereby enabled Homo sapiens to multiply exponentially. Around 13,000 BC, when people fed themselves by gathering wild plants and hunting wild animals, the area around the oasis of Jericho, in Palestine, could support at most one roaming band of about a hundred relatively healthy and well-nourished people. Around 8500 BC, when wild plants gave way to wheat fields, the oasis supported a large but cramped village of 1,000 people, who suffered far more from disease and malnourishment.
The currency of evolution is neither hunger nor pain, but rather copies of DNA helixes. Just as the economic success of a company is measured only by the number of dollars in its bank account, not by the happiness of its employees, so the evolutionary success of a species is measured by the number of copies of its DNA. If no more DNA copies remain, the species is extinct, just as a company without money is bankrupt. If a species boasts many DNA copies, it is a success, and the species flourishes. From such a perspective, 1,000 copies are always better than a hundred copies. This is the essence of the Agricultural Revolution: the ability to keep more people alive under worse conditions.
Yet why should individuals care about this evolutionary calculus? Why would any sane person lower his or her standard of living just to multiply the number of copies of the Homo sapiens genome? Nobody agreed to this deal: the Agricultural Revolution was a trap.
The rise of farming was a very gradual affair spread over centuries and millennia. A band of Homo sapiens gathering mushrooms and nuts and hunting deer and rabbit did not all of a sudden settle in a permanent village, ploughing fields, sowing wheat and carrying water from the river. The change proceeded by stages, each of which involved just a small alteration in daily life.
Homo sapiens reached the Middle East around 70,000 years ago. For the next 50,000 years our ancestors flourished there without agriculture. The natural resources of the area were enough to support its human population. In times of plenty people had a few more children, and in times of need a few less. Humans, like many mammals, have hormonal and genetic mechanisms that help control procreation. In good times females reach puberty earlier, and their chances of getting pregnant are a bit higher. In bad times puberty is late and fertility decreases.
To these natural population controls were added cultural mechanisms. Babies and small children, who move slowly and demand much attention, were a burden on nomadic foragers. People tried to space their children three to four years apart. Women did so by nursing their children around the clock and until a late age (around-the-clock suckling significantly decreases the chances of getting pregnant). Other methods included full or partial sexual abstinence (backed perhaps by cultural taboos), abortions and occasionally infanticide.4
During these long millennia people occasionally ate wheat grain, but this was a marginal part of their diet. About 18,000 years ago, the last ice age gave way to a period of global warming. As temperatures rose, so did rainfall. The new climate was ideal for Middle Eastern wheat and other cereals, which multiplied and spread. People began eating more wheat, and in exchange they inadvertently spread its growth. Since it was impossible to eat wild grains without first winnowing, grinding and cooking them, people who gathered these grains carried them back to their temporary campsites for processing. Wheat grains are small and numerous, so some of them inevitably fell on the way to the campsite and were lost. Over time, more and more wheat grew along favourite human trails and near campsites.
When humans burned down forests and thickets, this also helped wheat. Fire cleared away trees and shrubs, allowing wheat and other grasses to monopolise the sunlight, water and nutrients. Where wheat became particularly abundant, and game and other food sources were also plentiful, human bands could gradually give up their nomadic lifestyle and settle down in seasonal and even permanent camps.
At first they might have camped for four weeks during the harvest. A generation later, as wheat plants multiplied and spread, the harvest camp might have lasted for five weeks, then six, and finally it became a permanent village. Evidence of such settlements has been discovered throughout the Middle East, particularly in the Levant, where the Natufian culture flourished from 12,500 BC to 9500 BC. The Natufians were hunter-gatherers who subsisted on dozens of wild species, but they lived in permanent villages and devoted much of their time to the intensive gathering and processing of wild cereals. They built stone houses and granaries. They stored grain for times of need. They invented new tools such as stone scythes for harvesting wild wheat, and stone pestles and mortars to grind it.
In the years following 9500 BC, the descendants of the Natufians continued to gather and process cereals, but they also began to cultivate them in more and more elaborate ways. When gathering wild grains, they took care to lay aside part of the harvest to sow the fields next season. They discovered that they could achieve much better results by sowing the grains deep in the ground rather than haphazardly scattering them on the surface. So they began to hoe and plough. Gradually they also started to weed the fields, to guard them against parasites, and to water and fertilise them. As more effort was directed towards cereal cultivation, there was less time to gather and hunt wild species. The foragers became farmers.
No single step separated the woman gathering wild wheat from the woman farming domesticated wheat, so it’s hard to say exactly when the decisive transition to agriculture took place. But, by 8500 BC, the Middle East was peppered with permanent villages such as Jericho, whose inhabitants spent most of their time cultivating a few domesticated species.
With the move to permanent villages and the increase in food supply, the population began to grow. Giving up the nomadic lifestyle enabled women to have a child every year. Babies were weaned at an earlier age – they could be fed on porridge and gruel. The extra hands were sorely needed in the fields. But the extra mouths quickly wiped out the food surpluses, so even more fields had to be planted. As people began living in disease-ridden settlements, as children fed more on cereals and less on mother’s milk, and as each child competed for his or her porridge with more and more siblings, child mortality soared. In most agricultural societies at least one out of every three children died before reaching twenty.5 Yet the increase in births still outpaced the increase in deaths; humans kept having larger numbers of children.
With time, the ‘wheat bargain’ became more and more burdensome. Children died in droves, and adults ate bread by the sweat of their brows. The average person in Jericho of 8500 BC lived a harder life than the average person in Jericho of 9500 BC or 13,000 BC. But nobody realised what was happening. Every generation continued to live like the previous generation, making only small improvements here and there in the way things were done. Paradoxically, a series of ‘improvements’, each of which was meant to make life easier, added up to a millstone around the necks of these farmers.
Why did people make such a fateful miscalculation? For the same reason that people throughout history have miscalculated. People were unable to fathom the full consequences of their decisions. Whenever they decided to do a bit of extra work – say, to hoe the fields instead of scattering seeds on the surface – people thought, ‘Yes, we will have to work harder. But the harvest will be so bountiful! We won’t have to worry any more about lean years. Our children will never go to sleep hungry.’ It made sense. If you worked harder, you would have a better life. That was the plan.
The first part of the plan went smoothly. People indeed worked harder. But people did not foresee that the number of children would increase, meaning that the extra wheat would have to be shared between more children. Neither did the early farmers understand that feeding children with more porridge and less breast milk would weaken their immune system, and that permanent settlements would be hotbeds for infectious diseases. They did not foresee that by increasing their dependence on a single source of food, they were actually exposing themselves even more to the depredations of drought. Nor did the farmers foresee that in good years their bulging granaries would tempt thieves and enemies, compelling them to start building walls and doing guard duty.
Then why didn’t humans abandon farming when the plan backfired? Partly because it took generations for the small changes to accumulate and transform society and, by then, nobody remembered that they had ever lived differently. And partly because population growth burned humanity’s boats. If the adoption of ploughing increased a village’s population from a hundred to 110, which ten people would have volunteered to starve so that the others could go back to the good old times? There was no going back. The trap snapped shut.
The pursuit of an easier life resulted in much hardship, and not for the last time. It happens to us today. How many young college graduates have taken demanding jobs in high-powered firms, vowing that they will work hard to earn money that will enable them to retire and pursue their real interests when they are thirty-five? But by the time they reach that age, they have large mortgages, children to school, houses in the suburbs that necessitate at least two cars per family, and a sense that life is not worth living without really good wine and expensive holidays abroad. What are they supposed to do, go back to digging up roots? No, they double their efforts and keep slaving away.
One of history’s few iron laws is that luxuries tend to become necessities and to spawn new obligations. Once people get used to a certain luxury, they take it for granted. Then they begin to count on it. Finally they reach a point where they can’t live without it. Let’s take another familiar example from our own time. Over the last few decades, we have invented countless time-saving devices that are supposed to make life more relaxed – washing machines, vacuum cleaners, dishwashers, telephones, mobile phones, computers, email. Previously it took a lot of work to write a letter, address and stamp an envelope, and take it to the mailbox. It took days or weeks, maybe even months, to get a reply. Nowadays I can dash off an email, send it halfway around the globe, and (if my addressee is online) receive a reply a minute later. I’ve saved all that trouble and time, but do I live a more relaxed life?
Sadly not. Back in the snail-mail era, people usually only wrote letters when they had something important to relate. Rather than writing the first thing that came into their heads, they considered carefully what they wanted to say and how to phrase it. They expected to receive a similarly considered answer. Most people wrote and received no more than a handful of letters a month and seldom felt compelled to reply immediately. Today I receive dozens of emails each day, all from people who expect a prompt reply. We thought we were saving time; instead we revved up the treadmill of life to ten times its former speed and made our days more anxious and agitated.
Here and there a Luddite holdout refuses to open an email account, just as thousands of years ago some human bands refused to take up farming and so escaped the luxury trap. But the Agricultural Revolution didn’t need every band in a given region to join up. It only took one. Once one band settled down and started tilling, whether in the Middle East or Central America, agriculture was irresistible. Since farming created the conditions for swift demographic growth, farmers could usually overcome foragers by sheer weight of numbers. The foragers could either run away, abandoning their hunting grounds to field and pasture, or take up the ploughshare themselves. Either way, the old life was doomed.
The story of the luxury trap carries with it an important lesson. Humanity’s search for an easier life released immense forces of change that transformed the world in ways nobody envisioned or wanted. Nobody plotted the Agricultural Revolution or sought human dependence on cereal cultivation. A series of trivial decisions aimed mostly at filling a few stomachs and gaining a little security had the cumulative effect of forcing ancient foragers to spend their days carrying water buckets under a scorching sun.
The above scenario explains the Agricultural Revolution as a miscalculation. It’s very plausible. History is full of far more idiotic miscalculations. But there’s another possibility. Maybe it wasn’t the search for an easier life that brought about the transformation. Maybe Sapiens had other aspirations, and were consciously willing to make their lives harder in order to achieve them.
Scientists usually seek to attribute historical developments to cold economic and demographic factors. It sits better with their rational and mathematical methods. In the case of modern history, scholars cannot avoid taking into account non-material factors such as ideology and culture. The written evidence forces their hand. We have enough documents, letters and memoirs to prove that World War Two was not caused by food shortages or demographic pressures. But we have no documents from the Natufian culture, so when dealing with ancient periods the materialist school reigns supreme. It is difficult to prove that preliterate people were motivated by faith rather than economic necessity.
Yet, in some rare cases, we are lucky enough to find telltale clues. In 1995 archaeologists began to excavate a site in south-east Turkey called Göbekli Tepe. In the oldest stratum they discovered no signs of a settlement, houses or daily activities. They did, however, find monumental pillared structures decorated with spectacular engravings. Each stone pillar weighed up to seven tons and reached a height of sixteen feet. In a nearby quarry they found a half-chiselled pillar weighing fifty tons. Altogether, they uncovered more than ten monumental structures, the largest of them nearly 100 feet across.
Archaeologists are familiar with such monumental structures from sites around the world – the best-known example is Stonehenge in Britain. Yet as they studied Göbekli Tepe, they discovered an amazing fact. Stonehenge dates to 2500 BC, and was built by a developed agricultural society. The structures at Göbekli Tepe are dated to about 9500 BC, and all available evidence indicates that they were built by hunter-gatherers. The archaeological community initially found it difficult to credit these findings, but one test after another confirmed both the early date of the structures and the pre-agricultural society of their builders. The capabilities of ancient foragers, and the complexity of their cultures, seem to be far more impressive than was previously suspected.
13. The remains of a monumental structure from Göbekli Tepe. Bottom: One of the decorated stone pillars (about sixteen feet high).
{Photos and © Deutsches Archäologisches Institut.}
Why would a foraging society build such structures? They had no obvious utilitarian purpose. They were neither mammoth slaughterhouses nor places to shelter from rain or hide from lions. That leaves us with the theory that they were built for some mysterious cultural purpose that archaeologists have a hard time deciphering. Whatever it was, the foragers thought it worth a huge amount of effort and time. The only way to build Göbekli Tepe was for thousands of foragers belonging to different bands and tribes to cooperate over an extended period of time. Only a sophisticated religious or ideological system could sustain such efforts.
Göbekli Tepe held another sensational secret. For many years, geneticists have been tracing the origins of domesticated wheat. Recent discoveries indicate that at least one domesticated variant, einkorn wheat, originated in the Karaçadag Hills – less than twenty miles from Göbekli Tepe.6
This can hardly be a coincidence. It’s likely that the cultural centre of Göbekli Tepe was somehow connected to the initial domestication of wheat by humankind and of humankind by wheat. In order to feed the people who built and used the monumental structures, particularly large quantities of food were required. It may well be that foragers switched from gathering wild wheat to intense wheat cultivation, not to increase their normal food supply, but rather to support the building and running of a temple. In the conventional picture, pioneers first built a village, and when it prospered, they set up a temple in the middle. But Göbekli Tepe suggests that the temple may have been built first, and that a village later grew up around it.
The Faustian bargain between humans and grains was not the only deal our species made. Another deal was struck concerning the fate of animals such as sheep, goats, pigs and chickens. Nomadic bands that stalked wild sheep gradually altered the constitutions of the herds on which they preyed. This process probably began with selective hunting. Humans learned that it was to their advantage to hunt only adult rams and old or sick sheep. They spared fertile females and young lambs in order to safeguard the long-term vitality of the local herd. The second step might have been to actively defend the herd against predators, driving away lions, wolves and rival human bands. The band might next have corralled the herd into a narrow gorge in order to better control and defend it. Finally, people began to make a more careful selection among the sheep in order to tailor them to human needs. The most aggressive rams, those that showed the greatest resistance to human control, were slaughtered first. So were the skinniest and most inquisitive females. (Shepherds are not fond of sheep whose curiosity takes them far from the herd.) With each passing generation, the sheep became fatter, more submissive and less curious. Voilà! Mary had a little lamb and everywhere that Mary went the lamb was sure to go.
Alternatively, hunters may have caught and ‘adopted’ a lamb, fattening it during the months of plenty and slaughtering it in the leaner season. At some stage they began keeping a greater number of such lambs. Some of these reached puberty and began to procreate. The most aggressive and unruly lambs were first to the slaughter. The most submissive, most appealing lambs were allowed to live longer and procreate. The result was a herd of domesticated and submissive sheep.
Such domesticated animals – sheep, chickens, donkeys and others – supplied food (meat, milk, eggs), raw materials (skins, wool), and muscle power. Transportation, ploughing, grinding and other tasks, hitherto performed by human sinew, were increasingly carried out by animals. In most farming societies people focused on plant cultivation; raising animals was a secondary activity. But a new kind of society also appeared in some places, based primarily on the exploitation of animals: tribes of pastoralist herders.
As humans spread around the world, so did their domesticated animals. Ten thousand years ago, not more than a few million sheep, cattle, goats, boars and chickens lived in restricted Afro-Asian niches. Today the world contains about a billion sheep, a billion pigs, more than a billion cattle, and more than 25 billion chickens. And they are all over the globe. The domesticated chicken is the most widespread fowl ever. Following Homo sapiens, domesticated cattle, pigs and sheep are the second, third and fourth most widespread large mammals in the world. From a narrow evolutionary perspective, which measures success by the number of DNA copies, the Agricultural Revolution was a wonderful boon for chickens, cattle, pigs and sheep.
Unfortunately, the evolutionary perspective is an incomplete measure of success. It judges everything by the criteria of survival and reproduction, with no regard for individual suffering and happiness. Domesticated chickens and cattle may well be an evolutionary success story, but they are also among the most miserable creatures that ever lived. The domestication of animals was founded on a series of brutal practices that only became crueller with the passing of the centuries.
The natural lifespan of wild chickens is about seven to twelve years, and of cattle about twenty to twenty-five years. In the wild, most chickens and cattle died long before that, but they still had a fair chance of living for a respectable number of years. In contrast, the vast majority of domesticated chickens and cattle are slaughtered at the age of between a few weeks and a few months, because this has always been the optimal slaughtering age from an economic perspective. (Why keep feeding a cock for three years if it has already reached its maximum weight after three months?)
Egg-laying hens, dairy cows and draught animals are sometimes allowed to live for many years. But the price is subjugation to a way of life completely alien to their urges and desires. It’s reasonable to assume, for example, that bulls prefer to spend their days wandering over open prairies in the company of other bulls and cows rather than pulling carts and ploughshares under the yoke of a whip-wielding ape.
In order for humans to turn bulls, horses, donkeys and camels into obedient draught animals, their natural instincts and social ties had to be broken, their aggression and sexuality contained, and their freedom of movement curtailed. Farmers developed techniques such as locking animals inside pens and cages, bridling them in harnesses and leashes, training them with whips and cattle prods, and mutilating them. The process of taming almost always involves the castration of males. This restrains male aggression and enables humans selectively to control the herd’s procreation.
14. A painting from an Egyptian grave, c.1200 BC: A pair of oxen ploughing a field. In the wild, cattle roamed as they pleased in herds with a complex social structure. The castrated and domesticated ox wasted away its life under the lash and in a narrow pen, labouring alone or in pairs in a way that suited neither its body nor its social and emotional needs. When an ox could no longer pull the plough, it was slaughtered. (Note the hunched position of the Egyptian farmer who, much like the ox, spent his life in hard labour oppressive to his body, his mind and his social relationships.)
{© Visual/Corbis.}
In many New Guinean societies, the wealth of a person has traditionally been determined by the number of pigs he or she owns. To ensure that the pigs can’t run away, farmers in northern New Guinea slice off a chunk of each pig’s nose. This causes severe pain whenever the pig tries to sniff. Since the pigs cannot find food or even find their way around without sniffing, this mutilation makes them completely dependent on their human owners. In another area of New Guinea, it has been customary to gouge out pigs’ eyes, so that they cannot even see where they’re going.7
The dairy industry has its own ways of forcing animals to do its will. Cows, goats and sheep produce milk only after giving birth to calves, kids and lambs, and only as long as the youngsters are suckling. To maintain a supply of animal milk, a farmer needs to have calves, kids or lambs for suckling, but must prevent them from monopolising the milk. One common method throughout history was to simply slaughter the calves and kids shortly after birth, milk the mother for all she was worth, and then get her pregnant again. This is still a very widespread technique. In many modern dairy farms a milk cow usually lives for about five years before being slaughtered. During these five years she is almost constantly pregnant, and is fertilised within 60 to 120 days after giving birth in order to preserve maximum milk production. Her calves are separated from her shortly after birth. The females are reared to become the next generation of dairy cows, whereas the males are handed over to the care of the meat industry.8
Another method is to keep the calves and kids near their mothers, but prevent them by various stratagems from suckling too much milk. The simplest way to do that is to allow the kid or calf to start suckling, but drive it away once the milk starts flowing. This method usually encounters resistance from both kid and mother. Some shepherd tribes used to kill the offspring, eat its flesh, and then stuff the skin. The stuffed offspring was then presented to the mother so that its presence would encourage her milk production. The Nuer tribe in the Sudan went so far as to smear stuffed animals with their mother’s urine, to give the counterfeit calves a familiar, live scent. Another Nuer technique was to tie a ring of thorns around a calf’s mouth, so that it pricks the mother and causes her to resist suckling.9 Tuareg camel breeders in the Sahara used to puncture or cut off parts of the nose and upper lip of young camels in order to make suckling painful, thereby discouraging them from consuming too much milk.10
Not all agricultural societies were this cruel to their farm animals. The lives of some domesticated animals could be quite good. Sheep raised for wool, pet dogs and cats, war horses and race horses often enjoyed comfortable conditions. The Roman emperor Caligula allegedly planned to appoint his favourite horse, Incitatus, to the consulship. Shepherds and farmers throughout history have shown affection for their animals and taken great care of them, just as many slaveholders felt affection and concern for their slaves. It was no accident that kings and prophets styled themselves as shepherds and likened the way they and the gods cared for their people to a shepherd’s care for his flock.
15. A modern calf in an industrial meat farm. Immediately after birth the calf is separated from its mother and locked inside a tiny cage not much bigger than the calf’s own body. There the calf spends its entire life – about four months on average. It never leaves its cage, nor is it allowed to play with other calves or even walk – all so that its muscles will not grow strong. Soft muscles mean a soft and juicy steak. The first time the calf has a chance to walk, stretch its muscles and touch other calves is on its way to the slaughterhouse. In evolutionary terms, cattle represent one of the most successful animal species ever to exist. At the same time, they are some of the most miserable animals on the planet.
{Photo and © Anonymous for Animal Rights (Israel).}
Yet from the viewpoint of the herd, rather than that of the shepherd, it’s hard to avoid the impression that for the vast majority of domesticated animals, the Agricultural Revolution was a terrible catastrophe. Their evolutionary ‘success’ is meaningless. A rare wild rhinoceros on the brink of extinction is probably more satisfied than a calf who spends its short life inside a tiny box, fattened to produce juicy steaks. The contented rhinoceros is no less content for being among the last of its kind. The numerical success of the calf’s species is little consolation for the suffering the individual endures.
This discrepancy between evolutionary success and individual suffering is perhaps the most important lesson we can draw from the Agricultural Revolution. When we study the narrative of plants such as wheat and maize, maybe the purely evolutionary perspective makes sense. Yet in the case of animals such as cattle, sheep and Sapiens, each with a complex world of sensations and emotions, we have to consider how evolutionary success translates into individual experience. In the following chapters we will see time and again how a dramatic increase in the collective power and ostensible success of our species went hand in hand with much individual suffering.
THE AGRICULTURAL REVOLUTION IS ONE of the most controversial events in history. Some partisans proclaim that it set humankind on the road to prosperity and progress. Others insist that it led to perdition. This was the turning point, they say, where Sapiens cast off its intimate symbiosis with nature and sprinted towards greed and alienation. Whichever direction the road led, there was no going back. Farming enabled populations to increase so radically and rapidly that no complex agricultural society could ever again sustain itself if it returned to hunting and gathering. Around 10,000 BC, before the transition to agriculture, earth was home to about 5–8 million nomadic foragers. By the first century AD, only 1–2 million foragers remained (mainly in Australia, America and Africa), but their numbers were dwarfed by the world’s 250 million farmers.1
The vast majority of farmers lived in permanent settlements; only a few were nomadic shepherds. Settling down caused most people’s turf to shrink dramatically. Ancient hunter-gatherers usually lived in territories covering many dozens and even hundreds of square miles. ‘Home’ was the entire territory, with its hills, streams, woods and open sky. Peasants, on the other hand, spent most of their days working a small field or orchard, and their domestic lives centred on a cramped structure of wood, stone or mud, measuring no more than a few dozen feet – the house. The typical peasant developed a very strong attachment to this structure. This was a far-reaching revolution, whose impact was psychological as much as architectural. Henceforth, attachment to ‘my house’ and separation from the neighbours became the psychological hallmark of a much more self-centred creature.
The new agricultural territories were not only far smaller than those of ancient foragers, but also far more artificial. Aside from the use of fire, hunter-gatherers made few deliberate changes to the lands in which they roamed. Farmers, on the other hand, lived in artificial human islands that they laboriously carved out of the surrounding wilds. They cut down forests, dug canals, cleared fields, built houses, ploughed furrows, and planted fruit trees in tidy rows. The resulting artificial habitat was meant only for humans and ‘their’ plants and animals, and was often fenced off by walls and hedges. Farmer families did all they could to keep out wayward weeds and wild animals. If such interlopers made their way in, they were driven out. If they persisted, their human antagonists sought ways to exterminate them. Particularly strong defences were erected around the home. From the dawn of agriculture until this very day, billions of humans armed with branches, swatters, shoes and poison sprays have waged relentless war against the diligent ants, furtive roaches, adventurous spiders and misguided beetles that constantly infiltrate the human domicile.
For most of history these man-made enclaves remained very small, surrounded by expanses of untamed nature. The earth’s surface measures about 200 million square miles, of which 60 million is land. As late as AD 1400, the vast majority of farmers, along with their plants and animals, clustered together in an area of just 4.25 million square miles – 2 per cent of the planet’s surface.2 Everywhere else was too cold, too hot, too dry, too wet, or otherwise unsuited for cultivation. This minuscule 2 per cent of the earth’s surface constituted the stage on which history unfolded.
People found it difficult to leave their artificial islands. They could not abandon their houses, fields and granaries without grave risk of loss. Furthermore, as time went on they accumulated more and more things – objects, not easily transportable, that tied them down. Ancient farmers might seem to us dirt poor, but a typical family possessed more artefacts than an entire forager tribe.
While agricultural space shrank, agricultural time expanded. Foragers usually didn’t waste much time thinking about next month or next summer. Farmers sailed in their imagination years and decades into the future.
Foragers discounted the future because they lived from hand to mouth and could only preserve food or accumulate possessions with difficulty. Of course, they engaged in some advance planning. The creators of the cave paintings of Chauvet, Lascaux and Altamira almost certainly intended them to last for generations. Social alliances and political rivalries were long-term affairs. It often took years to repay a favour or to avenge a wrong. Nevertheless, in the subsistence economy of hunting and gathering, there was an obvious limit to such long-term planning. Paradoxically, it saved foragers a lot of anxiety. There was no sense in worrying about things that they could not influence.
The Agricultural Revolution made the future far more important than it had ever been before. Farmers must always keep the future in mind and must work in its service. The agricultural economy was based on a seasonal cycle of production, comprising long months of cultivation followed by short peak periods of harvest. On the night following the end of a plentiful harvest the peasants might celebrate for all they were worth, but within a week or so they were again up at dawn for a long day in the field. Although there was enough food for today, next week, and even next month, they had to worry about next year and the year after that.
Concern about the future was rooted not only in seasonal cycles of production, but also in the fundamental uncertainty of agriculture. Since most villages lived by cultivating a very limited variety of domesticated plants and animals, they were at the mercy of droughts, floods and pestilence. Peasants were obliged to produce more than they consumed so that they could build up reserves. Without grain in the silo, jars of olive oil in the cellar, cheese in the pantry and sausages hanging from the rafters, they would starve in bad years. And bad years were bound to come, sooner or later. A peasant living on the assumption that bad years would not come didn’t live long.
Consequently, from the very advent of agriculture, worries about the future became major players in the theatre of the human mind. Where farmers depended on rains to water their fields, the onset of the rainy season meant that each morning the farmers gazed towards the horizon, sniffing the wind and straining their eyes. Is that a cloud? Would the rains come on time? Would there be enough? Would violent storms wash the seeds from the fields and batter down seedlings? Meanwhile, in the valleys of the Euphrates, Indus and Yellow rivers, other peasants monitored, with no less trepidation, the height of the water. They needed the rivers to rise in order to spread the fertile topsoil washed down from the highlands, and to enable their vast irrigation systems to fill with water. But floods that surged too high or came at the wrong time could destroy their fields as much as a drought.
Peasants were worried about the future not just because they had more cause for worry, but also because they could do something about it. They could clear another field, dig another irrigation canal, sow more crops. The anxious peasant was as frenetic and hard-working as a harvester ant in the summer, sweating to plant olive trees whose oil would be pressed by his children and grandchildren, putting off until the winter or the following year the eating of the food he craved today.
The stress of farming had far-reaching consequences. It was the foundation of large-scale political and social systems. Sadly, the diligent peasants almost never achieved the future economic security they so craved through their hard work in the present. Everywhere, rulers and elites sprang up, living off the peasants’ surplus food and leaving them with only a bare subsistence.
These forfeited food surpluses fuelled politics, wars, art and philosophy. They built palaces, forts, monuments and temples. Until the late modern era, more than 90 per cent of humans were peasants who rose each morning to till the land by the sweat of their brows. The extra they produced fed the tiny minority of elites – kings, government officials, soldiers, priests, artists and thinkers – who fill the history books. History is something that very few people have been doing while everyone else was ploughing fields and carrying water buckets.
The food surpluses produced by peasants, coupled with new transportation technology, eventually enabled more and more people to cram together first into large villages, then into towns, and finally into cities, all of them joined together by new kingdoms and commercial networks.
Yet in order to take advantage of these new opportunities, food surpluses and improved transportation were not enough. The mere fact that one can feed a thousand people in the same town or a million people in the same kingdom does not guarantee that they can agree how to divide the land and water, how to settle disputes and conflicts, and how to act in times of drought or war. And if no agreement can be reached, strife spreads, even if the storehouses are bulging. It was not food shortages that caused most of history’s wars and revolutions. The French Revolution was spearheaded by affluent lawyers, not by famished peasants. The Roman Republic reached the height of its power in the first century BC, when treasure fleets from throughout the Mediterranean enriched the Romans beyond their ancestors’ wildest dreams. Yet it was at that moment of maximum affluence that the Roman political order collapsed into a series of deadly civil wars. Yugoslavia in 1991 had more than enough resources to feed all its inhabitants, and still disintegrated into a terrible bloodbath.
The problem at the root of such calamities is that humans evolved for millions of years in small bands of a few dozen individuals. The handful of millennia separating the Agricultural Revolution from the appearance of cities, kingdoms and empires was not enough time to allow an instinct for mass cooperation to evolve.
Despite the lack of such biological instincts, during the foraging era, hundreds of strangers were able to cooperate thanks to their shared myths. However, this cooperation was loose and limited. Every Sapiens band continued to run its life independently and to provide for most of its own needs. An ancient sociologist living 20,000 years ago, who had no knowledge of events following the Agricultural Revolution, might well have concluded that mythology had a fairly limited scope. Stories about ancestral spirits and tribal totems were strong enough to enable 500 people to trade seashells, celebrate the odd festival, and join forces to wipe out a Neanderthal band, but no more than that. Mythology, the ancient sociologist would have thought, could not possibly enable millions of strangers to cooperate on a daily basis.
But that turned out to be wrong. Myths, it transpired, are stronger than anyone could have imagined. When the Agricultural Revolution opened opportunities for the creation of crowded cities and mighty empires, people invented stories about great gods, motherlands and joint stock companies to provide the needed social links. While human evolution was crawling at its usual snail’s pace, the human imagination was building astounding networks of mass cooperation, unlike any other ever seen on earth.
Around 8500 BC the largest settlements in the world were villages such as Jericho, which contained a few hundred individuals. By 7000 BC the town of Çatalhöyük in Anatolia numbered between 5,000 and 10,000 individuals. It may well have been the world’s biggest settlement at the time. During the fifth and fourth millennia BC, cities with tens of thousands of inhabitants sprouted in the Fertile Crescent, and each of these held sway over many nearby villages. In 3100 BC the entire lower Nile Valley was united into the first Egyptian kingdom. Its pharaohs ruled thousands of square miles and hundreds of thousands of people. Around 2250 BC Sargon the Great forged the first empire, the Akkadian. It boasted over a million subjects and a standing army of 5,400 soldiers. Between 1000 BC and 500 BC, the first mega-empires appeared in the Middle East: the Late Assyrian Empire, the Babylonian Empire, and the Persian Empire. They ruled over many millions of subjects and commanded tens of thousands of soldiers.
In 221 BC the Qin dynasty united China, and shortly afterwards Rome united the Mediterranean basin. Taxes levied on 40 million Qin subjects paid for a standing army of hundreds of thousands of soldiers and a complex bureaucracy that employed more than 100,000 officials. The Roman Empire at its zenith collected taxes from up to 100 million subjects. This revenue financed a standing army of 250,000–500,000 soldiers, a road network still in use 1,500 years later, and theatres and amphitheatres that host spectacles to this day.
16. A stone stela inscribed with the Code of Hammurabi, c.1776 BC.
{© De Agostini Picture Library/G. Dagli Orti/The Bridgeman Art Library.}
Impressive, no doubt, but we mustn’t harbour rosy illusions about ‘mass cooperation networks’ operating in pharaonic Egypt or the Roman Empire. ‘Cooperation’ sounds very altruistic, but is not always voluntary and seldom egalitarian. Most human cooperation networks have been geared towards oppression and exploitation. The peasants paid for the burgeoning cooperation networks with their precious food surpluses, despairing when the tax collector wiped out an entire year of hard labour with a single stroke of his imperial pen. The famed Roman amphitheatres were often built by slaves so that wealthy and idle Romans could watch other slaves engage in vicious gladiatorial combat. Even prisons and concentration camps are cooperation networks, and can function only because thousands of strangers somehow manage to coordinate their actions.
17. The Declaration of Independence of the United States, signed 4 July 1776.
{Declaration Stone Engraving, courtesy of the National Archives and Records Administration, Washington, DC, “Charters of Freedom.”}
All these cooperation networks – from the cities of ancient Mesopotamia to the Qin and Roman empires – were ‘imagined orders’. The social norms that sustained them were based neither on ingrained instincts nor on personal acquaintances, but rather on belief in shared myths.
How can myths sustain entire empires? We have already discussed one such example: Peugeot. Now let’s examine two of the best-known myths of history: the Code of Hammurabi of c.1776 BC, which served as a cooperation manual for hundreds of thousands of ancient Babylonians; and the American Declaration of Independence of 1776 AD, which today still serves as a cooperation manual for hundreds of millions of modern Americans.
In 1776 BC Babylon was the world’s biggest city. The Babylonian Empire was probably the world’s largest, with more than a million subjects. It ruled most of Mesopotamia, including the bulk of modern Iraq and parts of present-day Syria and Iran. The Babylonian king most famous today was Hammurabi. His fame is due primarily to the text that bears his name, the Code of Hammurabi. This was a collection of laws and judicial decisions whose aim was to present Hammurabi as a role model of a just king, serve as a basis for a more uniform legal system across the Babylonian Empire, and teach future generations what justice is and how a just king acts.
Future generations took notice. The intellectual and bureaucratic elite of ancient Mesopotamia canonised the text, and apprentice scribes continued to copy it long after Hammurabi died and his empire lay in ruins. Hammurabi’s Code is therefore a good source for understanding the ancient Mesopotamians’ ideal of social order.3
The text begins by saying that the gods Anu, Enlil and Marduk – the leading deities of the Mesopotamian pantheon – appointed Hammurabi ‘to make justice prevail in the land, to abolish the wicked and the evil, to prevent the strong from oppressing the weak’.4 It then lists about 300 judgements, given in the set formula ‘If such and such a thing happens, such is the judgment.’ For example, judgements 196–9 and 209–14 read:
196. If a superior man should blind the eye of another superior man, they shall blind his eye.
197. If he should break the bone of another superior man, they shall break his bone.
198. If he should blind the eye of a commoner or break the bone of a commoner, he shall weigh and deliver 60 shekels of silver.
199. If he should blind the eye of a slave of a superior man or break the bone of a slave of a superior man, he shall weigh and deliver one-half of the slave’s value (in silver).5
209. If a superior man strikes a woman of superior class and thereby causes her to miscarry her fetus, he shall weigh and deliver ten shekels of silver for her fetus.
210. If that woman should die, they shall kill his daughter.
211. If he should cause a woman of commoner class to miscarry her fetus by the beating, he shall weigh and deliver five shekels of silver.
212. If that woman should die, he shall weigh and deliver thirty shekels of silver.
213. If he strikes a slave-woman of a superior man and thereby causes her to miscarry her fetus, he shall weigh and deliver two shekels of silver.
214. If that slave-woman should die, he shall weigh and deliver twenty shekels of silver.6
After listing his judgements, Hammurabi again declares that
These are the just decisions which Hammurabi, the able king, has established and thereby has directed the land along the course of truth and the correct way of life . . . I am Hammurabi, noble king. I have not been careless or negligent toward humankind, granted to my care by the god Enlil, and with whose shepherding the god Marduk charged me.7
Hammurabi’s Code asserts that Babylonian social order is rooted in universal and eternal principles of justice, dictated by the gods. The principle of hierarchy is of paramount importance. According to the code, people are divided into two genders and three classes: superior people, commoners and slaves. Members of each gender and class have different values. The life of a female commoner is worth thirty silver shekels and that of a slave-woman twenty silver shekels, whereas the eye of a male commoner is worth sixty silver shekels.
The code also establishes a strict hierarchy within families, according to which children are not independent persons, but rather the property of their parents. Hence, if one superior man kills the daughter of another superior man, the killer’s daughter is executed in punishment. To us it may seem strange that the killer remains unharmed whereas his innocent daughter is killed, but to Hammurabi and the Babylonians this seemed perfectly just. Hammurabi’s Code was based on the premise that if the king’s subjects all accepted their positions in the hierarchy and acted accordingly, the empire’s million inhabitants would be able to cooperate effectively. Their society could then produce enough food for its members, distribute it efficiently, protect itself against its enemies, and expand its territory so as to acquire more wealth and better security.
About 3,500 years after Hammurabi’s death, the inhabitants of thirteen British colonies in North America felt that the king of England was treating them unjustly. Their representatives gathered in the city of Philadelphia, and on 4 July 1776 the colonies declared that their inhabitants were no longer subjects of the British Crown. Their Declaration of Independence proclaimed universal and eternal principles of justice, which, like those of Hammurabi, were inspired by a divine power. However, the most important principle dictated by the American god was somewhat different from the principle dictated by the gods of Babylon. The American Declaration of Independence asserts that:
We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable rights, that among these are life, liberty, and the pursuit of happiness.
Like Hammurabi’s Code, the American founding document promises that if humans act according to its sacred principles, millions of them will be able to cooperate effectively, living safely and peacefully in a just and prosperous society. Like the Code of Hammurabi, the American Declaration of Independence was not just a document of its time and place – it was accepted by future generations as well. For more than 200 years, American schoolchildren have been copying and learning it by heart.
The two texts present us with an obvious dilemma. Both the Code of Hammurabi and the American Declaration of Independence claim to outline universal and eternal principles of justice, but according to the Americans all people are equal, whereas according to the Babylonians people are decidedly unequal. The Americans would, of course, say that they are right, and that Hammurabi is wrong. Hammurabi, naturally, would retort that he is right, and that the Americans are wrong. In fact, they are both wrong. Hammurabi and the American Founding Fathers alike imagined a reality governed by universal and immutable principles of justice, such as equality or hierarchy. Yet the only place where such universal principles exist is in the fertile imagination of Sapiens, and in the myths they invent and tell one another. These principles have no objective validity.
It is easy for us to accept that the division of people into ‘superiors’ and ‘commoners’ is a figment of the imagination. Yet the idea that all humans are equal is also a myth. In what sense do all humans equal one another? Is there any objective reality, outside the human imagination, in which we are truly equal? Are all humans equal to one another biologically? Let us try to translate the most famous line of the American Declaration of Independence into biological terms:
We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable rights, that among these are life, liberty, and the pursuit of happiness.
According to the science of biology, people were not ‘created’. They have evolved. And they certainly did not evolve to be ‘equal’. The idea of equality is inextricably intertwined with the idea of creation. The Americans got the idea of equality from Christianity, which argues that every person has a divinely created soul, and that all souls are equal before God. However, if we do not believe in the Christian myths about God, creation and souls, what does it mean that all people are ‘equal’? Evolution is based on difference, not on equality. Every person carries a somewhat different genetic code, and is exposed from birth to different environmental influences. This leads to the development of different qualities that carry with them different chances of survival. ‘Created equal’ should therefore be translated into ‘evolved differently’.
Just as people were never created, neither, according to the science of biology, is there a ‘Creator’ who ‘endows’ them with anything. There is only a blind evolutionary process, devoid of any purpose, leading to the birth of individuals. ‘Endowed by their creator’ should be translated simply into ‘born’.
Equally, there are no such things as rights in biology. There are only organs, abilities and characteristics. Birds fly not because they have a right to fly, but because they have wings. And it’s not true that these organs, abilities and characteristics are ‘unalienable’. Many of them undergo constant mutations, and may well be completely lost over time. The ostrich is a bird that lost its ability to fly. So ‘unalienable rights’ should be translated into ‘mutable characteristics’.
And what are the characteristics that evolved in humans? ‘Life’, certainly. But ‘liberty’? There is no such thing in biology. Just like equality, rights and limited liability companies, liberty is something that people invented and that exists only in their imagination. From a biological viewpoint, it is meaningless to say that humans in democratic societies are free, whereas humans in dictatorships are unfree. And what about ‘happiness’? So far biological research has failed to come up with a clear definition of happiness or a way to measure it objectively. Most biological studies acknowledge only the existence of pleasure, which is more easily defined and measured. So ‘life, liberty, and the pursuit of happiness’ should be translated into ‘life and the pursuit of pleasure’.
So here is that line from the American Declaration of Independence translated into biological terms:
We hold these truths to be self-evident, that all men evolved differently, that they are born with certain mutable characteristics, and that among these are life and the pursuit of pleasure.
Advocates of equality and human rights may be outraged by this line of reasoning. Their response is likely to be, ‘We know that people are not equal biologically! But if we believe that we are all equal in essence, it will enable us to create a stable and prosperous society.’ I have no argument with that. This is exactly what I mean by ‘imagined order’. We believe in a particular order not because it is objectively true, but because believing in it enables us to cooperate effectively and forge a better society. Imagined orders are not evil conspiracies or useless mirages. Rather, they are the only way large numbers of humans can cooperate effectively. Bear in mind, though, that Hammurabi might have defended his principle of hierarchy using the same logic: ‘I know that superiors, commoners and slaves are not inherently different kinds of people. But if we believe that they are, it will enable us to create a stable and prosperous society.’
It’s likely that more than a few readers squirmed in their chairs while reading the preceding paragraphs. Most of us today are educated to react in such a way. It is easy to accept that Hammurabi’s Code was a myth, but we do not want to hear that human rights are also a myth. If people realise that human rights exist only in the imagination, isn’t there a danger that our society will collapse? Voltaire said about God that ‘there is no God, but don’t tell that to my servant, lest he murder me at night’. Hammurabi would have said the same about his principle of hierarchy, and Thomas Jefferson about human rights. Homo sapiens has no natural rights, just as spiders, hyenas and chimpanzees have no natural rights. But don’t tell that to our servants, lest they murder us at night.
Such fears are well justified. A natural order is a stable order. There is no chance that gravity will cease to function tomorrow, even if people stop believing in it. In contrast, an imagined order is always in danger of collapse, because it depends upon myths, and myths vanish once people stop believing in them. In order to safeguard an imagined order, continuous and strenuous efforts are imperative. Some of these efforts take the shape of violence and coercion. Armies, police forces, courts and prisons are ceaselessly at work forcing people to act in accordance with the imagined order. If an ancient Babylonian blinded his neighbour, some violence was usually necessary in order to enforce the law of ‘an eye for an eye’. When, in 1860, a majority of American citizens concluded that African slaves are human beings and must therefore enjoy the right of liberty, it took a bloody civil war to make the southern states acquiesce.
However, an imagined order cannot be sustained by violence alone. It requires some true believers as well. Prince Talleyrand, who began his chameleon-like career under Louis XVI, later served the revolutionary and Napoleonic regimes, and switched loyalties in time to end his days working for the restored monarchy, summed up decades of governmental experience by saying that ‘You can do many things with bayonets, but it is rather uncomfortable to sit on them.’ A single priest often does the work of a hundred soldiers – far more cheaply and effectively. Moreover, no matter how efficient bayonets are, somebody must wield them. Why should the soldiers, jailors, judges and police maintain an imagined order in which they do not believe? Of all human collective activities, the one most difficult to organise is violence. To say that a social order is maintained by military force immediately raises the question: what maintains the military order? It is impossible to organise an army solely by coercion. At least some of the commanders and soldiers must truly believe in something, be it God, honour, motherland, manhood or money.
An even more interesting question concerns those standing at the top of the social pyramid. Why should they wish to enforce an imagined order if they themselves don’t believe in it? It is quite common to argue that the elite may do so out of cynical greed. Yet a cynic who believes in nothing is unlikely to be greedy. It does not take much to provide the objective biological needs of Homo sapiens. After those needs are met, more money can be spent on building pyramids, taking holidays around the world, financing election campaigns, funding your favourite terrorist organisation, or investing in the stock market and making yet more money – all of which are activities that a true cynic would find utterly meaningless. Diogenes, the Greek philosopher who founded the Cynical school, lived in a barrel. When Alexander the Great once visited Diogenes as he was relaxing in the sun, and asked if there were anything he might do for him, the Cynic answered the all-powerful conqueror, ‘Yes, there is something you can do for me. Please move a little to the side. You are blocking the sunlight.’
This is why cynics don’t build empires and why an imagined order can be maintained only if large segments of the population – and in particular large segments of the elite and the security forces – truly believe in it. Christianity would not have lasted 2,000 years if the majority of bishops and priests failed to believe in Christ. American democracy would not have lasted almost 250 years if the majority of presidents and congressmen failed to believe in human rights. The modern economic system would not have lasted a single day if the majority of investors and bankers failed to believe in capitalism.
How do you cause people to believe in an imagined order such as Christianity, democracy or capitalism? First, you never admit that the order is imagined. You always insist that the order sustaining society is an objective reality created by the great gods or by the laws of nature. People are unequal, not because Hammurabi said so, but because Enlil and Marduk decreed it. People are equal, not because Thomas Jefferson said so, but because God created them that way. Free markets are the best economic system, not because Adam Smith said so, but because these are the immutable laws of nature.
You also educate people thoroughly. From the moment they are born, you constantly remind them of the principles of the imagined order, which are incorporated into anything and everything. They are incorporated into fairy tales, dramas, paintings, songs, etiquette, political propaganda, architecture, recipes and fashions. For example, today people believe in equality, so it’s fashionable for rich kids to wear jeans, which were originally working-class attire. In the Middle Ages people believed in class divisions, so no young nobleman would have worn a peasant’s smock. Back then, to be addressed as ‘Sir’ or ‘Madam’ was a rare privilege reserved for the nobility, and often purchased with blood. Today all polite correspondence, regardless of the recipient, begins with ‘Dear Sir or Madam’.
The humanities and social sciences devote most of their energies to explaining exactly how the imagined order is woven into the tapestry of life. In the limited space at our disposal we can only scratch the surface. Three main factors prevent people from realising that the order organising their lives exists only in their imagination:
a. The imagined order is embedded in the material world. Though the imagined order exists only in our minds, it can be woven into the material reality around us, and even set in stone. Most Westerners today believe in individualism. They believe that every human is an individual, whose worth does not depend on what other people think of him or her. Each of us has within ourselves a brilliant ray of light that gives value and meaning to our lives. In modern Western schools teachers and parents tell children that if their classmates make fun of them, they should ignore it. Only they themselves, not others, know their true worth.
In modern architecture, this myth leaps out of the imagination to take shape in stone and mortar. The ideal modern house is divided into many small rooms so that each child can have a private space, hidden from view, providing for maximum autonomy. This private room almost invariably has a door, and in some households it may be accepted practice for the child to close, and perhaps lock, the door. Even parents may be forbidden to enter without knocking and asking permission. The room is usually decorated as the child sees fit, with rock-star posters on the wall and dirty socks on the floor. Somebody growing up in such a space cannot help but imagine himself ‘an individual’, his true worth emanating from within rather than from without.
Medieval noblemen did not believe in individualism. Someone’s worth was determined by their place in the social hierarchy, and by what other people said about them. Being laughed at was a horrible indignity. Noblemen taught their children to protect their good name whatever the cost. Like modern individualism, the medieval value system left the imagination and was manifested in the stone of medieval castles. The castle rarely contained private rooms for children (or anyone else, for that matter). The teenage son of a medieval baron did not have a private room on the castle’s second floor, with posters of Richard the Lionheart and King Arthur on the walls and a locked door that his parents were not allowed to open. He slept alongside many other youths in a large hall. He was always on display and always had to take into account what others saw and said. Someone growing up in such conditions naturally concluded that a man’s true worth was determined by his place in the social hierarchy and by what other people said of him.8
b. The imagined order shapes our desires. Most people do not wish to accept that the order governing their lives is imaginary, but in fact every person is born into a pre-existing imagined order, and his or her desires are shaped from birth by its dominant myths. Our personal desires thereby become the imagined order’s most important defences.
For instance, the most cherished desires of present-day Westerners are shaped by romantic, nationalist, capitalist and humanist myths that have been around for centuries. Friends giving advice often tell each other, ‘Follow your heart.’ But the heart is a double agent that usually takes its instructions from the dominant myths of the day, and the very recommendation to ‘Follow your heart’ was implanted in our minds by a combination of nineteenth-century Romantic myths and twentieth-century consumerist myths. The Coca-Cola Company, for example, has marketed Diet Coke around the world under the slogan, ‘Diet Coke. Do what feels good.’
Even what people take to be their most personal desires are usually programmed by the imagined order. Let’s consider, for example, the popular desire to take a holiday abroad. There is nothing natural or obvious about this. A chimpanzee alpha male would never think of using his power in order to go on holiday into the territory of a neighbouring chimpanzee band. The elite of ancient Egypt spent their fortunes building pyramids and having their corpses mummified, but none of them thought of going shopping in Babylon or taking a skiing holiday in Phoenicia. People today spend a great deal of money on holidays abroad because they are true believers in the myths of romantic consumerism.
Romanticism tells us that in order to make the most of our human potential we must have as many different experiences as we can. We must open ourselves to a wide spectrum of emotions; we must sample various kinds of relationships; we must try different cuisines; we must learn to appreciate different styles of music. One of the best ways to do all that is to break free from our daily routine, leave behind our familiar setting, and go travelling in distant lands, where we can ‘experience’ the culture, the smells, the tastes and the norms of other people. We hear again and again the romantic myths about ‘how a new experience opened my eyes and changed my life’.
Consumerism tells us that in order to be happy we must consume as many products and services as possible. If we feel that something is missing or not quite right, then we probably need to buy a product (a car, new clothes, organic food) or a service (housekeeping, relationship therapy, yoga classes). Every television commercial is another little legend about how consuming some product or service will make life better.
Romanticism, which encourages variety, meshes perfectly with consumerism. Their marriage has given birth to the infinite ‘market of experiences’, on which the modern tourism industry is founded. The tourism industry does not sell flight tickets and hotel bedrooms. It sells experiences. Paris is not a city, nor India a country – they are both experiences, the consumption of which is supposed to widen our horizons, fulfil our human potential, and make us happier. Consequently, when the relationship between a millionaire and his wife is going through a rocky patch, he takes her on an expensive trip to Paris. The trip is not a reflection of some independent desire, but rather of an ardent belief in the myths of romantic consumerism. A wealthy man in ancient Egypt would never have dreamed of solving a relationship crisis by taking his wife on holiday to Babylon. Instead, he might have built for her the sumptuous tomb she had always wanted.
18. The Great Pyramid of Giza. The kind of thing rich people in ancient Egypt did with their money.
{© Adam Jones/Corbis.}
Like the elite of ancient Egypt, most people in most cultures dedicate their lives to building pyramids. Only the names, shapes and sizes of these pyramids change from one culture to the other. They may take the form, for example, of a suburban cottage with a swimming pool and an evergreen lawn, or a gleaming penthouse with an enviable view. Few question the myths that cause us to desire the pyramid in the first place.
c. The imagined order is inter-subjective. Even if by some superhuman effort I succeed in freeing my personal desires from the grip of the imagined order, I am just one person. In order to change the imagined order I must convince millions of strangers to cooperate with me. For the imagined order is not a subjective order existing in my own imagination – it is rather an inter-subjective order, existing in the shared imagination of thousands and millions of people.
In order to understand this, we need to understand the difference between ‘objective’, ‘subjective’, and ‘inter-subjective’.
An objective phenomenon exists independently of human consciousness and human beliefs. Radioactivity, for example, is not a myth. Radioactive emissions occurred long before people discovered them, and they are dangerous even when people do not believe in them. Marie Curie, one of the discoverers of radioactivity, did not know, during her long years of studying radioactive materials, that they could harm her body. While she did not believe that radioactivity could kill her, she nevertheless died of aplastic anaemia, a disease caused by overexposure to radioactive materials.
The subjective is something that exists depending on the consciousness and beliefs of a single individual. It disappears or changes if that particular individual changes his or her beliefs. Many a child believes in the existence of an imaginary friend who is invisible and inaudible to the rest of the world. The imaginary friend exists solely in the child’s subjective consciousness, and when the child grows up and ceases to believe in it, the imaginary friend fades away.
The inter-subjective is something that exists within the communication network linking the subjective consciousness of many individuals. If a single individual changes his or her beliefs, or even dies, it is of little importance. However, if most individuals in the network die or change their beliefs, the inter-subjective phenomenon will mutate or disappear. Inter-subjective phenomena are neither malevolent frauds nor insignificant charades. They exist in a different way from physical phenomena such as radioactivity, but their impact on the world may still be enormous. Many of history’s most important drivers are inter-subjective: law, money, gods, nations.
Peugeot, for example, is not the imaginary friend of Peugeot’s CEO. The company exists in the shared imagination of millions of people. The CEO believes in the company’s existence because the board of directors also believes in it, as do the company’s lawyers, the secretaries in the nearby office, the tellers in the bank, the brokers on the stock exchange, and car dealers from France to Australia. If the CEO alone were suddenly to stop believing in Peugeot’s existence, he’d quickly land in the nearest mental hospital and someone else would occupy his office.
Similarly, the dollar, human rights and the United States of America exist in the shared imagination of billions, and no single individual can threaten their existence. If I alone were to stop believing in the dollar, in human rights, or in the United States, it wouldn’t much matter. These imagined orders are inter-subjective, so in order to change them we must simultaneously change the consciousness of billions of people, which is not easy. A change of such magnitude can be accomplished only with the help of a complex organisation, such as a political party, an ideological movement, or a religious cult. However, in order to establish such complex organisations, it’s necessary to convince many strangers to cooperate with one another. And this will happen only if these strangers believe in some shared myths. It follows that in order to change an existing imagined order, we must first believe in an alternative imagined order.
In order to dismantle Peugeot, for example, we need to imagine something more powerful, such as the French legal system. In order to dismantle the French legal system we need to imagine something even more powerful, such as the French state. And if we would like to dismantle that too, we will have to imagine something yet more powerful.
There is no way out of the imagined order. When we break down our prison walls and run towards freedom, we are in fact running into the more spacious exercise yard of a bigger prison.
EVOLUTION DID NOT ENDOW HUMANS with the ability to play pick-up basketball. True, it produced legs for running, hands for dribbling, and shoulders for fouling, but all that this enables us to do is shoot hoops by ourselves. To get into a game with the strangers we find in the schoolyard on any given afternoon, we not only have to work in concert with four teammates we may never have met before—we also need to know that the five players on the opposing team are playing by the same rules. Other animals that engage strangers in ritualized aggression do so largely by instinct—puppies throughout the world have the rules for rough-and-tumble play hard-wired into their genes. But American teenagers have no genes for pick-up basketball. They can nevertheless play the game with complete strangers because they have all learned an identical set of ideas about basketball. These ideas are entirely imaginary, but if everyone shares them, we can all play the game.
The same applies, on a larger scale, to kingdoms, churches, and trade networks, with one important difference. The rules of basketball are relatively simple and concise, much like those necessary for cooperation in a forager band or small village. Each player can easily store them in his brain and still have room for songs, images, and shopping lists. But large systems of cooperation that involve not ten but thousands or even millions of humans require the handling and storage of huge amounts of information, much more than any single human brain can contain and process.
The large societies found in some other species, such as ants and bees, are stable and resilient because most of the information needed to sustain them is encoded in the genome. A female honeybee larva can, for example, grow up to be either a queen or a worker, depending on what food it is fed. Its DNA programmes the necessary behaviours for whatever role it will fulfil in life. Hives can be very complex social structures, containing many different kinds of workers, such as harvesters, nurses and cleaners. But so far researchers have failed to locate lawyer bees. Bees don’t need lawyers, because there is no danger that they might forget or violate the hive constitution. The queen does not cheat the cleaner bees of their food, and they never go on strike demanding higher wages.
But humans do such things all the time. Because the Sapiens social order is imagined, humans cannot preserve the critical information for running it simply by making copies of their DNA and passing these on to their progeny. A conscious effort has to be made to sustain laws, customs, procedures and manners; otherwise the social order would quickly collapse. For example, King Hammurabi decreed that people are divided into superiors, commoners and slaves. Unlike the beehive class system, this is not a natural division – there is no trace of it in the human genome. Had the Babylonians been unable to keep this ‘truth’ in mind, their society would have ceased to function. Similarly, when Hammurabi passed his DNA to his offspring, it did not encode his ruling that a superior man who killed a commoner woman must pay thirty silver shekels. Hammurabi had to instruct his sons deliberately in the laws of his empire, and his sons and grandsons had to do the same.
Empires generate huge amounts of information. Beyond laws, empires have to keep accounts of transactions and taxes, inventories of military supplies and merchant vessels, and calendars of festivals and victories. For millions of years people stored information in a single place – their brains. Unfortunately, the human brain is not a good storage device for empire-sized databases, for three main reasons.
First, its capacity is limited. True, some people have astonishing memories, and in ancient times there were memory professionals who could store in their heads the topographies of whole provinces and the law codes of entire states. Nevertheless, there is a limit that even master mnemonists cannot transcend. A lawyer might know by heart the entire law code of the Commonwealth of Massachusetts, but not the details of every legal proceeding that took place in Massachusetts from the Salem witch trials onward.
Secondly, humans die, and their brains die with them. Any information stored in a brain will be erased in less than a century. It is, of course, possible to pass memories from one brain to another, but after a few transmissions, the information tends to get garbled or lost.
Thirdly and most importantly, the human brain has been adapted to store and process only particular types of information. In order to survive, ancient hunter-gatherers had to remember the shapes, qualities and behaviour patterns of thousands of plant and animal species. They had to remember that a wrinkled yellow mushroom growing in autumn under an elm tree is most probably poisonous, whereas a similar-looking mushroom growing in winter under an oak tree is a good stomach-ache remedy. Hunter-gatherers also had to bear in mind the opinions and relations of several dozen band members. If Lucy needed a band member’s help to get John to stop harassing her, it was important for her to remember that John had fallen out last week with Mary, who would thus be a likely and enthusiastic ally. Consequently, evolutionary pressures have adapted the human brain to store immense quantities of botanical, zoological, topographical and social information.
But when particularly complex societies began to appear in the wake of the Agricultural Revolution, a completely new type of information became vital – numbers. Foragers were never obliged to handle large amounts of mathematical data. No forager needed to remember, say, the number of fruit on each tree in the forest. So human brains did not adapt to storing and processing numbers. Yet in order to maintain a large kingdom, mathematical data was vital. It was never enough to legislate laws and tell stories about guardian gods. One also had to collect taxes. In order to tax hundreds of thousands of people, it was imperative to collect data about people’s incomes and possessions; data about payments made; data about arrears, debts and fines; data about discounts and exemptions. This added up to millions of data bits, which had to be stored and processed. Without this capacity, the state would never know what resources it had and what further resources it could tap. When confronted with the need to memorise, recall and handle all these numbers, most human brains overdosed or fell asleep.
This mental limitation severely constrained the size and complexity of human collectives. When the number of people and the amount of property in a particular society crossed a critical threshold, it became necessary to store and process large amounts of mathematical data. Since the human brain could not do it, the system collapsed. For thousands of years after the Agricultural Revolution, human social networks remained relatively small and simple.
The first to overcome the problem were the ancient Sumerians, who lived in southern Mesopotamia. There, a scorching sun beating upon rich muddy plains produced plentiful harvests and prosperous towns. As the number of inhabitants grew, so did the amount of information required to coordinate their affairs. Between the years 3500 BC and 3000 BC, some unknown Sumerian geniuses invented a system for storing and processing information outside their brains, one that was custom-built to handle large amounts of mathematical data. The Sumerians thereby released their social order from the limitations of the human brain, opening the way for the appearance of cities, kingdoms and empires. The data-processing system invented by the Sumerians is called ‘writing’.
Writing is a method for storing information through material signs. The Sumerian writing system did so by combining two types of signs, which were pressed into clay tablets. One type of sign represented numbers. There were signs for 1, 10, 60, 600, 3,600 and 36,000. (The Sumerians used a combination of base-60 and base-10 numeral systems. Their base-60 system bestowed on us several important legacies, such as the division of the hour into sixty minutes and of the circle into 360 degrees.) The other type of sign represented people, animals, merchandise, territories, dates and so forth. By combining both types of signs the Sumerians were able to preserve far more data than any human brain could remember or any DNA chain could encode.
19. A clay tablet with an administrative text from the city of Uruk, c.3400–3000 BC. ‘Kushim’ may be the generic title of an officeholder, or the name of a particular individual. If Kushim was indeed a person, he may be the first individual in history whose name is known to us! All the names applied earlier in human history – the Neanderthals, the Natufians, Chauvet Cave, Göbekli Tepe – are modern inventions. We have no idea what the builders of Göbekli Tepe actually called the place. With the appearance of writing, we are beginning to hear history through the ears of its protagonists. When Kushim’s neighbours called out to him, they might really have shouted ‘Kushim!’ It is telling that the first recorded name in history belongs to an accountant, rather than a prophet, a poet or a great conqueror.1
{© The Schøyen Collection, Oslo and London, MS 1717. http://www.schoyencollection.com/.}
At this early stage, writing was limited to facts and figures. The great Sumerian novel, if there ever was one, was never committed to clay tablets. Writing was time-consuming and the reading public tiny, so no one saw any reason to use it for anything other than essential record-keeping. If we look for the first words of wisdom reaching us from our ancestors, 5,000 years ago, we’re in for a big disappointment. The earliest messages our ancestors have left us read, for example, ‘29,086 measures barley 37 months Kushim.’ The most probable reading of this sentence is: ‘A total of 29,086 measures of barley were received over the course of 37 months. Signed, Kushim.’ Alas, the first texts of history contain no philosophical insights, no poetry, legends, laws, or even royal triumphs. They are humdrum economic documents, recording the payment of taxes, the accumulation of debts and the ownership of property.
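To make the arithmetic of such a record concrete, here is a minimal sketch in Python, a modern illustration rather than a reconstruction of scribal practice, of how a quantity like Kushim’s 29,086 measures could be tallied with the numeric sign values mentioned above (1, 10, 60, 600, 3,600 and 36,000); the function name decompose is simply an illustrative choice.
# Illustrative only: greedily break a quantity into the Sumerian numeric sign
# values listed above, largest first. Not a claim about how scribes actually worked.
SIGN_VALUES = [36000, 3600, 600, 60, 10, 1]

def decompose(quantity):
    """Return how many of each numeric sign would be needed to tally `quantity`."""
    tally = {}
    remaining = quantity
    for value in SIGN_VALUES:
        count, remaining = divmod(remaining, value)
        if count:
            tally[value] = count
    return tally

print(decompose(29086))  # {3600: 8, 60: 4, 10: 4, 1: 6}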
Only one other type of text survived from these ancient days, and it is even less exciting: lists of words, copied over and over again by apprentice scribes as training exercises. Even had a bored student wanted to write out some of his poems instead of copying a bill of sale, he could not have done so. The earliest Sumerian writing was a partial rather than a full script. Full script is a system of material signs that can represent spoken language more or less completely. It can therefore express everything people can say, including poetry. Partial script, on the other hand, is a system of material signs that can represent only particular types of information, belonging to a limited field of activity. Latin script, ancient Egyptian hieroglyphics and Braille are full scripts. You can use them to write tax registers, love poems, history books, food recipes and business law. In contrast, the earliest Sumerian script, like modern mathematical notation and musical notation, is a partial script. You can use mathematical script to make calculations, but you cannot use it to write love poems.
Partial script cannot express the entire spectrum of a spoken language, but it can express things that fall outside its scope: the Sumerian and mathematical scripts cannot be used to write poetry, but they keep tax accounts very effectively.
20. A man holding a quipu, as depicted in a Spanish manuscript following the fall of the Inca Empire.
{Manuscript: History of the Inca Kingdom, Nueva Coronica y buen Gobierno, c.1587, illustrations by Guaman Poma de Ayala, Peru © The Art Archive/Archaeological Museum Lima/Gianni Dagli Orti (ref: AA365957).}
It didn’t disturb the Sumerians that their script was ill-suited for writing poetry. They didn’t invent it in order to copy spoken language, but rather to do things that spoken language failed at. There were some cultures, such as those of the pre-Columbian Andes, which used only partial scripts throughout their entire histories, unfazed by their scripts’ limitations and feeling no need for a full version. Andean script was very different from its Sumerian counterpart. In fact, it was so different that many people would argue it wasn’t a script at all. It was not written on clay tablets or pieces of paper. Rather, it was written by tying knots on colourful cords called quipus. Each quipu consisted of many cords of different colours, made of wool or cotton. On each cord, several knots were tied in different places. A single quipu could contain hundreds of cords and thousands of knots. By combining different knots on different cords with different colours, it was possible to record large amounts of mathematical data relating to, for example, tax collection and property ownership.2
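As a rough illustration of the principle, here is a minimal sketch of the decimal reading that scholars generally give to quipu number cords: one cluster of knots per decimal place, with higher powers nearer the main cord. Cord colours and knot types carried further meaning that the sketch ignores, and the function names are invented for the example.
# Illustrative sketch of decimal knot-recording: one cluster of knots per decimal
# place (a zero digit would be an empty stretch of cord), most significant first.
def knots_for(value):
    """Knot counts per decimal place, for example 365 -> [3, 6, 5]."""
    return [int(digit) for digit in str(value)]

def read_cord(knot_clusters):
    """Reverse the reading: turn knot clusters back into a number."""
    value = 0
    for count in knot_clusters:
        value = value * 10 + count
    return value

print(knots_for(365))        # [3, 6, 5]
print(read_cord([3, 6, 5]))  # 365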
For hundreds, perhaps thousands of years, quipus were essential to the business of cities, kingdoms and empires.3 They reached their full potential under the Inca Empire, which ruled 10–12 million people and covered today’s Peru, Ecuador and Bolivia, as well as chunks of Chile, Argentina and Colombia. Thanks to quipus, the Incas could save and process large amounts of data, without which they would not have been able to maintain the complex administrative machinery that an empire of that size requires.
In fact, quipus were so effective and accurate that in the early years following the Spanish conquest of South America, the Spaniards themselves employed quipus in the work of administering their new empire. The problem was that the Spaniards did not themselves know how to record and read quipus, making them dependent on local professionals. The continent’s new rulers realised that this placed them in a tenuous position – the native quipu experts could easily mislead and cheat their overlords. So once Spain’s dominion was more firmly established, quipus were phased out and the new empire’s records were kept entirely in Latin script and numerals. Very few quipus survived the Spanish occupation, and most of those remaining are undecipherable, since, unfortunately, the art of reading quipus has been lost.
The Mesopotamians eventually started to want to write down things other than monotonous mathematical data. Between 3000 BC and 2500 BC more and more signs were added to the Sumerian system, gradually transforming it into a full script that we today call cuneiform. By 2500 BC, kings were using cuneiform to issue decrees, priests were using it to record oracles, and less exalted citizens were using it to write personal letters. At roughly the same time, Egyptians developed another full script known as hieroglyphics. Other full scripts were developed in China around 1200 BC and in Central America around 1000–500 BC.
From these initial centres, full scripts spread far and wide, taking on various new forms and novel tasks. People began to write poetry, history books, romances, dramas, prophecies and cookbooks. Yet writing’s most important task continued to be the storage of reams of mathematical data, and that task remained the prerogative of partial script. The Hebrew Bible, the Greek Iliad, the Hindu Mahabharata and the Buddhist Tipitaka all began as oral works. For many generations they were transmitted orally and would have lived on even had writing never been invented. But tax registries and complex bureaucracies were born together with partial script, and the two remain inextricably linked to this day like Siamese twins – think of the cryptic entries in computerised databases and spreadsheets.
As more and more things were written, and particularly as administrative archives grew to huge proportions, new problems appeared. Individuals can easily retrieve information stored in their own minds. My brain stores billions of bits of data, yet I can quickly, almost instantaneously, recall the name of Italy’s capital, immediately afterwards recollect what I did on 11 September 2001, and then reconstruct the route leading from my house to the Hebrew University in Jerusalem. Exactly how the brain does it remains a mystery, but we all know that the brain’s retrieval system is amazingly efficient, except when you are trying to remember where you put your car keys.
How, though, do you find and retrieve information stored on quipu cords or clay tablets? If you have just ten tablets or a hundred tablets, it’s not a problem. But what if you have accumulated thousands of them, as did one of Hammurabi’s contemporaries, King Zimrilim of Mari?
Imagine for a moment that it’s 1776 BC. Two Marians are quarrelling over possession of a wheat field. Jacob insists that he bought the field from Esau thirty years ago. Esau retorts that he in fact rented the field to Jacob for a term of thirty years, and that now, the term being up, he intends to reclaim it. They shout and wrangle and start pushing one another before they realise that they can resolve their dispute by going to the royal archive, where are housed the deeds and bills of sale that apply to all the kingdom’s real estate. Upon arriving at the archive they are shuttled from one official to the other. They wait through several herbal tea breaks, are told to come back tomorrow, and eventually are taken by a grumbling clerk to look for the relevant clay tablet. The clerk opens a door and leads them into a huge room lined, floor to ceiling, with thousands of clay tablets. No wonder the clerk is sour-faced. How is he supposed to locate the deed to the disputed wheat field written thirty years ago? Even if he finds it, how will he be able to cross-check to ensure that the one from thirty years ago is the latest document relating to the field in question? If he can’t find it, does that prove that Esau never sold or rented out the field? Or just that the document got lost, or turned to mush when some rain leaked into the archive?
Clearly, just imprinting a document in clay is not enough to guarantee efficient, accurate and convenient data processing. That requires methods of organisation like catalogues, methods of reproduction like photocopy machines, methods of rapid and accurate retrieval like computer algorithms, and pedantic (but hopefully cheerful) librarians who know how to use these tools.
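To make the clerk’s predicament concrete, here is a minimal sketch, with invented names and records, of the kind of catalogue he lacked: deeds filed under the field they concern and kept in date order, so that both finding the relevant tablet and checking whether it is the latest one become simple look-ups.
# A toy catalogue: deeds indexed by the field they concern, kept in date order.
from collections import defaultdict

archive = defaultdict(list)   # field name -> list of (year, deed text)

def file_deed(field, year, text):
    """Record a deed under the field it concerns (years here are just regnal years)."""
    archive[field].append((year, text))
    archive[field].sort()      # fine for a toy archive; a real system would index more cleverly

def latest_deed(field):
    """The most recent deed for a field, or None if nothing was ever filed there."""
    deeds = archive.get(field)
    return deeds[-1] if deeds else None   # None means no record, not proof that nothing happened

file_deed('wheat field by the river', 1, 'first recorded deed for this field')
file_deed('wheat field by the river', 5, 'latest recorded deed for this field')
print(latest_deed('wheat field by the river'))   # (5, 'latest recorded deed for this field')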
Inventing such methods proved to be far more difficult than inventing writing. Many writing systems developed independently in cultures distant in time and place from each other. Every decade archaeologists discover another few forgotten scripts. Some of them might prove to be even older than the Sumerian scratches in clay. But most of them remain curiosities because those who invented them failed to invent efficient ways of cataloguing and retrieving data. What set apart Sumer, as well as pharaonic Egypt, ancient China and the Inca Empire, is that these cultures developed good techniques of archiving, cataloguing and retrieving written records. They obviously had no computers or photocopying machines, but they did have catalogues, and far more importantly, they did create special schools in which professional scribes, clerks, librarians and accountants were rigorously trained in the secrets of data-processing.
A writing exercise from a school in ancient Mesopotamia discovered by modern archaeologists gives us a glimpse into the lives of these students, some 4,000 years ago:
I went in and sat down, and my teacher read my tablet. He said, ‘There’s something missing!’
And he caned me.
One of the people in charge said, ‘Why did you open your mouth without my permission?’
And he caned me.
The one in charge of rules said, ‘Why did you get up without my permission?’
And he caned me.
The gatekeeper said, ‘Why are you going out without my permission?’
And he caned me.
The keeper of the beer jug said, ‘Why did you get some without my permission?’
And he caned me.
The Sumerian teacher said, ‘Why did you speak Akkadian?’*
And he caned me.
My teacher said, ‘Your handwriting is no good!’
And he caned me.4
Ancient scribes learned not merely to read and write, but also to use catalogues, dictionaries, calendars, forms and tables. They studied and internalised techniques of cataloguing, retrieving and processing information very different from those used by the brain. In the brain, all data is freely associated. When I go with my spouse to sign a mortgage on our new home, I am reminded of the first place we lived together, which reminds me of our honeymoon in New Orleans, which reminds me of alligators, which remind me of dragons, which remind me of The Ring of the Nibelung, and suddenly, before I know it, there I am humming the Siegfried leitmotif to a puzzled bank clerk. In bureaucracy, things must be kept apart. There is one drawer for home mortgages, another for marriage certificates, a third for tax registers, and a fourth for lawsuits. Otherwise, how can you find anything? Things that belong in more than one drawer, like Wagnerian music dramas (do I file them under ‘music’, ‘theatre’, or perhaps invent a new category altogether?), are a terrible headache. So one is forever adding, deleting and rearranging drawers.
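A toy sketch of the drawer problem, with the drawer names taken from the example above: every record must go into exactly one drawer, there is no freely associated pile, and anything that straddles two categories forces either a choice or a brand-new drawer.
# Each record goes into exactly one drawer; nothing is freely associated.
drawers = {
    'home mortgages': [],
    'marriage certificates': [],
    'tax registers': [],
    'lawsuits': [],
}

def file_record(drawer, record):
    """File a record into a single drawer, or fail loudly if no such drawer exists."""
    if drawer not in drawers:
        raise KeyError('No such drawer: ' + drawer + ' (open a new one, or misfile the record)')
    drawers[drawer].append(record)

file_record('home mortgages', 'loan on the new home')
file_record('lawsuits', 'dispute over a wheat field')
# And where does a Wagnerian music drama go? Under 'music'? 'theatre'? A drawer of its own?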
In order to function, the people who operate such a system of drawers must be reprogrammed to stop thinking as humans and to start thinking as clerks and accountants. As everyone from ancient times till today knows, clerks and accountants think in a non-human fashion. They think like filing cabinets. This is not their fault. If they don’t think that way their drawers will all get mixed up and they won’t be able to provide the services their government, company or organisation requires. The most important impact of script on human history is precisely this: it has gradually changed the way humans think and view the world. Free association and holistic thought have given way to compartmentalisation and bureaucracy.
As the centuries passed, bureaucratic methods of data processing grew ever more different from the way humans naturally think – and ever more important. A critical step was taken sometime before the ninth century AD, when a new partial script was invented, one that could store and process mathematical data with unprecedented efficiency. This partial script was composed of ten signs, representing the numbers from 0 to 9. Confusingly, these signs are known as Arabic numerals even though they were first invented by the Hindus (even more confusingly, modern Arabs use a set of digits that look quite different from Western ones). But the Arabs get the credit because when they invaded India they encountered the system, understood its usefulness, refined it, and spread it through the Middle East and then to Europe. When several other signs were later added to the Arabic numerals (such as the signs for addition, subtraction and multiplication), the basis of modern mathematical notation came into being.
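As a small illustration of why ten signs suffice for any quantity, here is a sketch of the positional principle itself (it says nothing about the historical transmission described above): each sign’s magnitude is carried by its place, which the snippet below rebuilds by repeated division by ten.
# Rebuild familiar positional notation from repeated division by ten.
DIGITS = '0123456789'

def to_decimal_string(n):
    """Write a non-negative whole number using only the ten signs above."""
    if n == 0:
        return '0'
    digits = []
    while n > 0:
        n, remainder = divmod(n, 10)
        digits.append(DIGITS[remainder])
    return ''.join(reversed(digits))

print(to_decimal_string(29086))   # '29086': five signs, and no special symbol needed for 'ten thousand'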
Although this system of writing remains a partial script, it has become the world’s dominant language. Almost all states, companies, organisations and institutions – whether they speak Arabic, Hindi, English or Norwegian – use mathematical script to record and process data. Every piece of information that can be translated into mathematical script is stored, spread and processed with mind-boggling speed and efficiency.
An equation for calculating the acceleration of mass i under the influence of gravity, according to the Theory of Relativity.
When most laypeople see such an equation, they usually panic and freeze, like a deer caught in the headlights of a speeding vehicle. The reaction is quite natural, and does not betray a lack of intelligence or curiosity. With rare exceptions, human brains are simply incapable of thinking through concepts like relativity and quantum mechanics. Physicists nevertheless manage to do so, because they set aside the traditional human way of thinking, and learn to think anew with the help of external data-processing systems. Crucial parts of their thought process take place not in the head, but inside computers or on classroom blackboards.
A person who wishes to influence the decisions of governments, organisations and companies must therefore learn to speak in numbers. Experts do their best to translate even ideas such as ‘poverty’, ‘happiness’ and ‘honesty’ into numbers (‘the poverty line’, ‘subjective well-being levels’, ‘credit rating’). Entire fields of knowledge, such as physics and engineering, have already lost almost all touch with the spoken human language, and are maintained solely by mathematical script.
More recently, mathematical script has given rise to an even more revolutionary writing system, a computerised binary script consisting of only two signs: 0 and 1. The words I am now typing on my keyboard are written within my computer by different combinations of 0 and 1.
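A quick illustration of that claim: the short phrase below, rendered as the zeros and ones a computer actually stores for it (here via UTF-8, one common character encoding among several).
# Show the binary form of a short phrase, one eight-bit group per byte.
text = 'imagined order'
bits = ' '.join(format(byte, '08b') for byte in text.encode('utf-8'))
print(bits)   # 01101001 01101101 01100001 01100111 ... and so on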
Writing was born as the maidservant of human consciousness, but is increasingly becoming its master. Our computers have trouble understanding how Homo sapiens talks, feels and dreams. So we are teaching Homo sapiens to talk, feel and dream in the language of numbers, which can be understood by computers.
And this is not the end of the story. The field of artificial intelligence is seeking to create a new kind of intelligence based solely on the binary script of computers. Science-fiction movies such as The Matrix and The Terminator tell of a day when the binary script throws off the yoke of humanity. When humans try to regain control of the rebellious script, it responds by attempting to wipe out the human race.
UNDERSTANDING HUMAN HISTORY IN THE millennia following the Agricultural Revolution boils down to a single question: how did humans organise themselves in mass-cooperation networks, when they lacked the biological instincts necessary to sustain such networks? The short answer is that humans created imagined orders and devised scripts. These two inventions filled the gaps left by our biological inheritance.
However, the appearance of these networks was, for many, a dubious blessing. The imagined orders sustaining these networks were neither neutral nor fair. They divided people into make-believe groups, arranged in a hierarchy. The upper levels enjoyed privileges and power, while the lower ones suffered from discrimination and oppression. Hammurabi’s Code, for example, established a pecking order of superiors, commoners and slaves. Superiors got all the good things in life. Commoners got what was left. Slaves got a beating if they complained.
Despite its proclamation of the equality of all men, the imagined order established by the Americans in 1776 also established a hierarchy. It created a hierarchy between men, who benefited from it, and women, whom it left disempowered. It created a hierarchy between whites, who enjoyed liberty, and blacks and American Indians, who were considered humans of a lesser type and therefore did not share in the equal rights of men. Many of those who signed the Declaration of Independence were slaveholders. They did not release their slaves upon signing the Declaration, nor did they consider themselves hypocrites. In their view, the rights of men had little to do with Negroes.
The American order also consecrated the hierarchy between rich and poor. Most Americans at that time had little problem with the inequality caused by wealthy parents passing their money and businesses on to their children. In their view, equality meant simply that the same laws applied to rich and poor. It had nothing to do with unemployment benefits, integrated education or health insurance. Liberty, too, carried very different connotations than it does today. In 1776, it did not mean that the disempowered (certainly not blacks or Indians or, God forbid, women) could gain and exercise power. It meant simply that the state could not, except in unusual circumstances, confiscate a citizen’s private property or tell him what to do with it. The American order thereby upheld the hierarchy of wealth, which some thought was mandated by God and others viewed as representing the immutable laws of nature. Nature, it was claimed, rewarded merit with wealth while penalising indolence.
All the above-mentioned distinctions – between free persons and slaves, between whites and blacks, between rich and poor – are rooted in fictions. (The hierarchy of men and women will be discussed later.) Yet it is an iron rule of history that every imagined hierarchy disavows its fictional origins and claims to be natural and inevitable. For instance, many people who have viewed the hierarchy of free persons and slaves as natural and correct have argued that slavery is not a human invention. Hammurabi saw it as ordained by the gods. Aristotle argued that slaves have a ‘slavish nature’ whereas free people have a ‘free nature’. Their status in society is merely a reflection of their innate nature.
Ask white supremacists about the racial hierarchy, and you are in for a pseudoscientific lecture concerning the biological differences between the races. You are likely to be told that there is something in Caucasian blood or genes that makes whites naturally more intelligent, moral and hardworking. Ask a diehard capitalist about the hierarchy of wealth, and you are likely to hear that it is the inevitable outcome of objective differences in abilities. The rich have more money, in this view, because they are more capable and diligent. No one should be bothered, then, if the wealthy get better health care, better education and better nutrition. The rich richly deserve every perk they enjoy.
21. A sign on a South African beach from the period of apartheid, restricting its usage to ‘whites’ only. People with lighter skin colour are typically more in danger of sunburn than people with darker skin. Yet there was no biological logic behind the division of South African beaches. Beaches reserved for people with lighter skin were not characterised by lower levels of ultraviolet radiation.
{Photo: Guy Tillim/Africa Media Online, 1989 © africanpictures/akg.}
Hindus who adhere to the caste system believe that cosmic forces have made one caste superior to another. According to a famous Hindu creation myth, the gods fashioned the world out of the body of a primeval being, the Purusa. The sun was created from the Purusa’s eye, the moon from the Purusa’s brain, the Brahmins (priests) from its mouth, the Kshatriyas (warriors) from its arms, the Vaishyas (peasants and merchants) from its thighs, and the Shudras (servants) from its legs. Accept this explanation and the sociopolitical differences between Brahmins and Shudras are as natural and eternal as the differences between the sun and the moon.1 The ancient Chinese believed that when the goddess Nü Wa created humans from earth, she kneaded aristocrats from fine yellow soil, whereas commoners were formed from brown mud.2
Yet, to the best of our understanding, these hierarchies are all the product of human imagination. Brahmins and Shudras were not really created by the gods from different body parts of a primeval being. Instead, the distinction between the two castes was created by laws and norms invented by humans in northern India about 3,000 years ago. Contrary to Aristotle, there is no known biological difference between slaves and free people. Human laws and norms have turned some people into slaves and others into masters. Between blacks and whites there are some objective biological differences, such as skin colour and hair type, but there is no evidence that the differences extend to intelligence or morality.
Most people claim that their social hierarchy is natural and just, while those of other societies are based on false and ridiculous criteria. Modern Westerners are taught to scoff at the idea of racial hierarchy. They are shocked by laws prohibiting blacks from living in white neighbourhoods, studying in white schools, or being treated in white hospitals. But the hierarchy of rich and poor – which mandates that rich people live in separate and more luxurious neighbourhoods, study in separate and more prestigious schools, and receive medical treatment in separate and better-equipped facilities – seems perfectly sensible to many Americans and Europeans. Yet it’s a proven fact that most rich people are rich for the simple reason that they were born into a rich family, while most poor people will remain poor throughout their lives simply because they were born into a poor family.
Unfortunately, complex human societies seem to require imagined hierarchies and unjust discrimination. Of course not all hierarchies are morally identical, and some societies suffered from more extreme types of discrimination than others, yet scholars know of no large society that has been able to dispense with discrimination altogether. Time and again people have created order in their societies by classifying the population into imagined categories, such as superiors, commoners and slaves; whites and blacks; patricians and plebeians; Brahmins and Shudras; or rich and poor. These categories have regulated relations between millions of humans by making some people legally, politically or socially superior to others.
Hierarchies serve an important function. They enable complete strangers to know how to treat one another without wasting the time and energy needed to become personally acquainted. In George Bernard Shaw’s Pygmalion, Henry Higgins doesn’t need to establish an intimate acquaintance with Eliza Doolittle in order to understand how he should relate to her. Just hearing her talk tells him that she is a member of the underclass with whom he can do as he wishes – for example, using her as a pawn in his bet to pass off a flower girl as a duchess. A modern Eliza working at a florist’s needs to know how much effort to put into selling roses and gladioli to the dozens of people who enter the shop each day. She can’t make a detailed enquiry into the tastes and wallets of each individual. Instead, she uses social cues – the way the person is dressed, his or her age, and, if she’s not politically correct, his or her skin colour. That is how she immediately distinguishes between the accounting-firm partner who’s likely to place a large order for expensive roses, and a messenger boy who can only afford a bunch of daisies.
Of course, differences in natural abilities also play a role in the formation of social distinctions. But such diversities of aptitudes and character are usually mediated through imagined hierarchies. This happens in two important ways. First and foremost, most abilities have to be nurtured and developed. Even if somebody is born with a particular talent, that talent will usually remain latent if it is not fostered, honed and exercised. Not all people get the same chance to cultivate and refine their abilities. Whether or not they have such an opportunity will usually depend on their place within their society’s imagined hierarchy. Harry Potter is a good example. Removed from his distinguished wizard family and brought up by ignorant muggles, he arrives at Hogwarts without any experience in magic. It takes him seven books to gain a firm command of his powers and knowledge of his unique abilities.
Second, even if people belonging to different classes develop exactly the same abilities, they are unlikely to enjoy equal success because they will have to play the game by different rules. If, in British-ruled India, an Untouchable, a Brahmin, a Catholic Irishman and a Protestant Englishman had somehow developed exactly the same business acumen, they still would not have had the same chance of becoming rich. The economic game was rigged by legal restrictions and unofficial glass ceilings.
All societies are based on imagined hierarchies, but not necessarily on the same hierarchies. What accounts for the differences? Why did traditional Indian society classify people according to caste, Ottoman society according to religion, and American society according to race? In most cases the hierarchy originated as the result of a set of accidental historical circumstances and was then perpetuated and refined over many generations as different groups developed vested interests in it.
For instance, many scholars surmise that the Hindu caste system took shape when Indo-Aryan people invaded the Indian subcontinent about 3,000 years ago, subjugating the local population. The invaders established a stratified society, in which they – of course – occupied the leading positions (priests and warriors), leaving the natives to live as servants and slaves. The invaders, who were few in number, feared losing their privileged status and unique identity. To forestall this danger, they divided the population into castes, each of which was required to pursue a specific occupation or perform a specific role in society. Each had different legal status, privileges and duties. Mixing of castes – social interaction, marriage, even the sharing of meals – was prohibited. And the distinctions were not just legal – they became an inherent part of religious mythology and practice.
The rulers argued that the caste system reflected an eternal cosmic reality rather than a chance historical development. Concepts of purity and impurity were essential elements in Hindu religion, and they were harnessed to buttress the social pyramid. Pious Hindus were taught that contact with members of a different caste could pollute not only them personally, but society as a whole, and should therefore be abhorred. Such ideas are hardly unique to Hindus. Throughout history, and in almost all societies, concepts of pollution and purity have played a leading role in enforcing social and political divisions and have been exploited by numerous ruling classes to maintain their privileges. The fear of pollution is not a complete fabrication of priests and princes, however. It probably has its roots in biological survival mechanisms that make humans feel an instinctive revulsion towards potential disease carriers, such as sick persons and dead bodies. If you want to keep any human group isolated – women, Jews, Roma, gays, blacks – the best way to do it is convince everyone that these people are a source of pollution.
The Hindu caste system and its attendant laws of purity became deeply embedded in Indian culture. Long after the Indo-Aryan invasion was forgotten, Indians continued to believe in the caste system and to abhor the pollution caused by caste mixing. Castes were not immune to change. In fact, as time went by, large castes were divided into sub-castes. Eventually the original four castes turned into 3,000 different groupings called jati (literally ‘birth’). But this proliferation of castes did not change the basic principle of the system, according to which every person is born into a particular rank, and any infringement of its rules pollutes the person and society as a whole. A person’s jati determines her profession, the food she can eat, her place of residence and her eligible marriage partners. Usually a person can marry only within his or her caste, and the resulting children inherit that status.
Whenever a new profession developed or a new group of people appeared on the scene, they had to be recognised as a caste in order to receive a legitimate place within Hindu society. Groups that failed to win recognition as a caste were, literally, outcasts – in this stratified society, they did not even occupy the lowest rung. They became known as Untouchables. They had to live apart from all other people and scrape together a living in humiliating and disgusting ways, such as sifting through garbage dumps for scrap material. Even members of the lowest caste avoided mingling with them, eating with them, touching them and certainly marrying them. In modern India, matters of marriage and work are still heavily influenced by the caste system, despite all attempts by the democratic government of India to break down such distinctions and convince Hindus that there is nothing polluting in caste mixing.3
A similar vicious circle perpetuated the racial hierarchy in modern America. From the sixteenth to the eighteenth century, the European conquerors imported millions of African slaves to work the mines and plantations of America. They chose to import slaves from Africa rather than from Europe or East Asia due to three circumstantial factors. Firstly, Africa was closer, so it was cheaper to import slaves from Senegal than from Vietnam.
Secondly, in Africa there already existed a well-developed slave trade (exporting slaves mainly to the Middle East), whereas in Europe slavery was very rare. It was obviously far easier to buy slaves in an existing market than to create a new one from scratch.
Thirdly, and most importantly, American plantations in places such as Virginia, Haiti and Brazil were plagued by malaria and yellow fever, which had originated in Africa. Africans had acquired over the generations a partial genetic immunity to these diseases, whereas Europeans were totally defenceless and died in droves. It was consequently wiser for a plantation owner to invest his money in an African slave than in a European slave or indentured labourer. Paradoxically, genetic superiority (in terms of immunity) translated into social inferiority: precisely because Africans were fitter in tropical climates than Europeans, they ended up as the slaves of European masters! Due to these circumstantial factors, the burgeoning new societies of America were to be divided into a ruling caste of white Europeans and a subjugated caste of black Africans.
But people don’t like to say that they keep slaves of a certain race or origin simply because it’s economically expedient. Like the Aryan conquerors of India, white Europeans in the Americas wanted to be seen not only as economically successful but also as pious, just and objective. Religious and scientific myths were pressed into service to justify this division. Theologians argued that Africans descend from Ham, son of Noah, saddled by his father with a curse that his offspring would be slaves. Biologists argued that blacks are less intelligent than whites and their moral sense less developed. Doctors alleged that blacks live in filth and spread diseases – in other words, they are a source of pollution.
These myths struck a chord in American culture, and in Western culture generally. They continued to exert their influence long after the conditions that created slavery had disappeared. In the early nineteenth century imperial Britain outlawed slavery and stopped the Atlantic slave trade, and in the decades that followed slavery was gradually outlawed throughout the American continent. Notably, this was the first and only time in history that a large number of slaveholding societies voluntarily abolished slavery. But, even though the slaves were freed, the racist myths that justified slavery persisted. Separation of the races was maintained by racist legislation and social custom.
The result was a self-reinforcing cycle of cause and effect, a vicious circle. Consider, for example, the southern United States immediately after the Civil War. In 1865 the Thirteenth Amendment to the US Constitution outlawed slavery and the Fourteenth Amendment mandated that citizenship and the equal protection of the law could not be denied on the basis of race. However, two centuries of slavery meant that most black families were far poorer and far less educated than most white families. A black person born in Alabama in 1865 thus had much less chance of getting a good education and a well-paid job than did his white neighbours. His children, born in the 1880s and 1890s, started life with the same disadvantage – they, too, were born to an uneducated, poor family.
But economic disadvantage was not the whole story. Alabama was also home to many poor whites who lacked the opportunities available to their better-off racial brothers and sisters. In addition, the Industrial Revolution and the waves of immigration made the United States an extremely fluid society, where rags could quickly turn into riches. If money was all that mattered, the sharp divide between the races should soon have blurred, not least through intermarriage.
But that did not happen. By 1865 whites, as well as many blacks, took it to be a simple matter of fact that blacks were less intelligent, more violent and sexually dissolute, lazier and less concerned about personal cleanliness than whites. They were thus the agents of violence, theft, rape and disease – in other words, pollution. If a black Alabaman in 1895 miraculously managed to get a good education and then applied for a respectable job such as a bank teller, his odds of being accepted were far worse than those of an equally qualified white candidate. The stigma that labelled blacks as, by nature, unreliable, lazy and less intelligent conspired against him.
You might think that people would gradually understand that these stigmas were myth rather than fact and that blacks would be able, over time, to prove themselves just as competent, law-abiding and clean as whites. In fact, the opposite happened – these prejudices became more and more entrenched as time went by. Since all the best jobs were held by whites, it became easier to believe that blacks really are inferior. ‘Look,’ said the average white citizen, ‘blacks have been free for generations, yet there are almost no black professors, lawyers, doctors or even bank tellers. Isn’t that proof that blacks are simply less intelligent and hard-working?’ Trapped in this vicious circle, blacks were not hired for white-collar jobs because they were deemed unintelligent, and the proof of their inferiority was the paucity of blacks in white-collar jobs.
The vicious circle did not stop there. As anti-black stigmas grew stronger, they were translated into a system of ‘Jim Crow’ laws and norms that were meant to safeguard the racial order in the South. Blacks were forbidden to vote in elections, to study in white schools, to buy in white stores, to eat in white restaurants, to sleep in white hotels. The justification for all of this was that blacks were foul, slothful and vicious, so whites had to be protected from them. Whites did not want to sleep in the same hotel as blacks or to eat in the same restaurant, for fear of diseases. They did not want their children learning in the same school as black children, for fear of brutality and bad influences. They did not want blacks voting in elections, since blacks were ignorant and immoral. These fears were substantiated by scientific studies that ‘proved’ that blacks were indeed less educated, that various diseases were more common among them, and that their crime rate was far higher (the studies ignored the fact that these ‘facts’ resulted from discrimination against blacks).
By the mid-twentieth century, segregation in the former Confederate states was probably worse than in the late nineteenth century. Clennon King, a black student who applied to the University of Mississippi in 1958, was forcibly committed to a mental asylum. The presiding judge ruled that a black person must surely be insane to think that he could be admitted to the University of Mississippi.
The vicious circle: a chance historical situation is translated into a rigid social system.
Nothing was as revolting to American southerners (and many northerners) as sexual relations and marriage between black men and white women. Sex between the races became the greatest taboo and any violation, or suspected violation, was viewed as deserving immediate and summary punishment in the form of lynching. The Ku Klux Klan, a white supremacist secret society, perpetrated many such killings. They could have taught the Hindu Brahmins a thing or two about purity laws.
With time, the racism spread to more and more cultural arenas. American aesthetic culture was built around white standards of beauty. The physical attributes of the white race – for example light skin, fair and straight hair, a small upturned nose – came to be identified as beautiful. Typical black features – dark skin, dark and bushy hair, a flattened nose – were deemed ugly. These preconceptions ingrained the imagined hierarchy at an even deeper level of human consciousness.
Such vicious circles can go on for centuries and even millennia, perpetuating an imagined hierarchy that sprang from a chance historical occurrence. Unjust discrimination often gets worse, not better, with time. Money comes to money, and poverty to poverty. Education comes to education, and ignorance to ignorance. Those once victimised by history are likely to be victimised yet again. And those whom history has privileged are more likely to be privileged again.
Most sociopolitical hierarchies lack a logical or biological basis – they are nothing but the perpetuation of chance events supported by myths. That is one good reason to study history. If the division into blacks and whites or Brahmins and Shudras was grounded in biological realities – that is, if Brahmins really had better brains than Shudras – biology would be sufficient for understanding human society. Since the biological distinctions between different groups of Homo sapiens are, in fact, negligible, biology can’t explain the intricacies of Indian society or American racial dynamics. We can only understand those phenomena by studying the events, circumstances, and power relations that transformed figments of imagination into cruel – and very real – social structures.
Different societies adopt different kinds of imagined hierarchies. Race is very important to modern Americans but was relatively insignificant to medieval Muslims. Caste was a matter of life and death in medieval India, whereas in modern Europe it is practically non-existent. One hierarchy, however, has been of supreme importance in all known human societies: the hierarchy of gender. People everywhere have divided themselves into men and women. And almost everywhere men have got the better deal, at least since the Agricultural Revolution.
Some of the earliest Chinese texts are oracle bones, dating to 1200 BC, used to divine the future. On one was engraved the question: ‘Will Lady Hao’s childbearing be lucky?’ To which was written the reply: ‘If the child is born on a ding day, lucky; if on a geng day, vastly auspicious.’ However, Lady Hao was to give birth on a jiayin day. The text ends with the morose observation: ‘Three weeks and one day later, on jiayin day, the child was born. Not lucky. It was a girl.’4 More than 3,000 years later, when Communist China enacted the ‘one child’ policy, many Chinese families continued to regard the birth of a girl as a misfortune. Parents would occasionally abandon or murder newborn baby girls in order to have another shot at getting a boy.
In many societies women were simply the property of men, most often their fathers, husbands or brothers. Rape, in many legal systems, falls under property violation – in other words, the victim is not the woman who was raped but the male who owns her. This being the case, the legal remedy was the transfer of ownership – the rapist was required to pay a bride price to the woman’s father or brother, upon which she became the rapist’s property. The Bible decrees that ‘If a man meets a virgin who is not betrothed, and seizes her and lies with her, and they are found, then the man who lay with her shall give to the father of the young woman fifty shekels of silver, and she shall be his wife’ (Deuteronomy 22:28–9). The ancient Hebrews considered this a reasonable arrangement.
Raping a woman who did not belong to any man was not considered a crime at all, just as picking up a lost coin on a busy street is not considered theft. And if a husband raped his own wife, he had committed no crime. In fact, the idea that a husband could rape his wife was an oxymoron. To be a husband was to have full control of your wife’s sexuality. To say that a husband ‘raped’ his wife was as illogical as saying that a man stole his own wallet. Such thinking was not confined to the ancient Middle East. As of 2006, there were still fifty-three countries where a husband could not be prosecuted for the rape of his wife. Even in Germany, rape laws were amended only in 1997 to create a legal category of marital rape.5
Is the division into men and women a product of the imagination, like the caste system in India and the racial system in America, or is it a natural division with deep biological roots? And if it is indeed a natural division, are there also biological explanations for the preference given to men over women?
Some of the cultural, legal and political disparities between men and women reflect the obvious biological differences between the sexes. Childbearing has always been women’s job, because men don’t have wombs. Yet around this hard universal kernel, every society accumulated layer upon layer of cultural ideas and norms that have little to do with biology. Societies associate a host of attributes with masculinity and femininity that, for the most part, lack a firm biological basis.
For instance, in democratic Athens of the fifth century BC, an individual possessing a womb had no independent legal status and was forbidden to participate in popular assemblies or to be a judge. With few exceptions, such an individual could not benefit from a good education, nor engage in business or in philosophical discourse. None of Athens’ political leaders, none of its great philosophers, orators, artists or merchants had a womb. Does having a womb make a person unfit, biologically, for these professions? The ancient Athenians thought so. Modern Athenians disagree. In present-day Athens, women vote, are elected to public office, make speeches, design everything from jewellery to buildings to software, and go to university. Their wombs do not keep them from doing any of these things as successfully as men do. True, they are still under-represented in politics and business – only about 12 per cent of the members of Greece’s parliament are women. But there is no legal barrier to their participation in politics, and most modern Greeks think it is quite normal for a woman to serve in public office.
Many modern Greeks also think that an integral part of being a man is being sexually attracted to women only, and having sexual relations exclusively with the opposite sex. They don’t see this as a cultural bias, but rather as a biological reality – relations between people of opposite sexes are natural, and relations between people of the same sex unnatural. In fact, though, Mother Nature does not mind if men are sexually attracted to one another. It’s only human mothers and fathers steeped in particular cultures who make a scene if their son has a fling with the boy next door. The mother’s tantrums are not a biological imperative. A significant number of human cultures have viewed homosexual relations as not only legitimate but even socially constructive, ancient Greece being the most notable example. The Iliad does not mention that Thetis had any objection to her son Achilles’ relations with Patroclus. Queen Olympias of Macedon was one of the most temperamental and forceful women of the ancient world, and even had her own husband, King Philip, assassinated. Yet she didn’t have a fit when her son, Alexander the Great, brought his lover Hephaestion home for dinner.
How can we distinguish what is biologically determined from what people merely try to justify through biological myths? A good rule of thumb is ‘Biology enables, Culture forbids.’ Biology is willing to tolerate a very wide spectrum of possibilities. It’s culture that obliges people to realise some possibilities while forbidding others. Biology enables women to have children – some cultures oblige women to realise this possibility. Biology enables men to enjoy sex with one another – some cultures forbid them to realise this possibility.
Culture tends to argue that it forbids only that which is unnatural. But from a biological perspective, nothing is unnatural. Whatever is possible is by definition also natural. A truly unnatural behaviour, one that goes against the laws of nature, simply cannot exist, so it would need no prohibition. No culture has ever bothered to forbid men to photosynthesise, women to run faster than the speed of light, or negatively charged electrons to be attracted to each other.
In truth, our concepts ‘natural’ and ‘unnatural’ are taken not from biology, but from Christian theology. The theological meaning of ‘natural’ is ‘in accordance with the intentions of the God who created nature’. Christian theologians argued that God created the human body, intending each limb and organ to serve a particular purpose. If we use our limbs and organs for the purpose envisioned by God, then it is a natural activity. To use them differently than God intends is unnatural. But evolution has no purpose. Organs have not evolved with a purpose, and the way they are used is in constant flux. There is not a single organ in the human body that only does the job its prototype did when it first appeared hundreds of millions of years ago. Organs evolve to perform a particular function, but once they exist, they can be adapted for other usages as well. Mouths, for example, appeared because the earliest multicellular organisms needed a way to take nutrients into their bodies. We still use our mouths for that purpose, but we also use them to kiss, speak and, if we are Rambo, to pull the pins out of hand grenades. Are any of these uses unnatural simply because our worm-like ancestors 600 million years ago didn’t do those things with their mouths?
Similarly, wings didn’t suddenly appear in all their aerodynamic glory. They developed from organs that served another purpose. According to one theory, insect wings evolved millions of years ago from body protrusions on flightless bugs. Bugs with bumps had a larger surface area than those without bumps, and this enabled them to absorb more sunlight and thus stay warmer. In a slow evolutionary process, these solar heaters grew larger. The same structure that was good for maximum sunlight absorption – lots of surface area, little weight – also, by coincidence, gave the insects a bit of a lift when they skipped and jumped. Those with bigger protrusions could skip and jump farther. Some insects started using the things to glide, and from there it was a small step to wings that could actually propel the bug through the air. Next time a mosquito buzzes in your ear, accuse her of unnatural behaviour. If she were well behaved and content with what God gave her, she’d use her wings only as solar panels.
The same sort of multitasking applies to our sexual organs and behaviour. Sex first evolved for procreation, and courtship rituals as a way of sizing up the fitness of a potential mate. But many animals now put both to use for a multitude of social purposes that have little to do with creating little copies of themselves. Chimpanzees, for example, use sex to cement political alliances, establish intimacy and defuse tensions. Is that unnatural?
There is little sense, then, in arguing that the natural function of women is to give birth, or that homosexuality is unnatural. Most of the laws, norms, rights and obligations that define manhood and womanhood reflect human imagination more than biological reality.
Biologically, humans are divided into males and females. A male Homo sapiens is one who has one X chromosome and one Y chromosome; a female is one with two Xs. But ‘man’ and ‘woman’ name social, not biological, categories. While in the great majority of cases in most human societies men are males and women are females, the social terms carry a lot of baggage that has only a tenuous, if any, relationship to the biological terms. A man is not a Sapiens with particular biological qualities such as XY chromosomes, testicles and lots of testosterone. Rather, he fits into a particular slot in his society’s imagined human order. His culture’s myths assign him particular masculine roles (like engaging in politics), rights (like voting) and duties (like military service). Likewise, a woman is not a Sapiens with two X chromosomes, a womb and plenty of oestrogen. Rather, she is a female member of an imagined human order. The myths of her society assign her unique feminine roles (raising children), rights (protection against violence) and duties (obedience to her husband). Since myths, rather than biology, define the roles, rights and duties of men and women, the meanings of ‘manhood’ and ‘womanhood’ have varied immensely from one society to another.
22. Eighteenth-century masculinity: an official portrait of King Louis XIV of France. Note the long wig, stockings, high-heeled shoes, dancer’s posture – and huge sword. In contemporary Europe, all these (except for the sword) would be considered marks of effeminacy. But in his time Louis was a European paragon of manhood and virility.
{© Réunion des musées nationaux/Gérard Blot.}
23. Twenty-first-century masculinity: an official portrait of Barack Obama. What happened to the wig, stockings, high heels – and sword? Dominant men have never looked so dull and dreary as they do today. During most of history, dominant men have been colourful and flamboyant, such as American Indian chiefs with their feathered headdresses and Hindu maharajas decked out in silks and diamonds. Throughout the animal kingdom males tend to be more colourful and accessorised than females – think of peacocks’ tails and lions’ manes.
{© Visual/Corbis.}
To make things less confusing, scholars usually distinguish between ‘sex’, which is a biological category, and ‘gender’, a cultural category. Sex is divided between males and females, and the qualities of this division are objective and have remained constant throughout history. Gender is divided between men and women (and some cultures recognise other categories). So-called ‘masculine’ and ‘feminine’ qualities are inter-subjective and undergo constant changes. For example, there are far-reaching differences in the behaviour, desires, dress and even body posture expected from women in classical Athens and women in modern Athens.6
Sex is child’s play; but gender is serious business. To get to be a member of the male sex is the simplest thing in the world. You just need to be born with an X and a Y chromosome. To get to be a female is equally simple. A pair of X chromosomes will do it. In contrast, becoming a man or a woman is a very complicated and demanding undertaking. Since most masculine and feminine qualities are cultural rather than biological, no society automatically crowns each male a man, or every female a woman. Nor are these titles laurels that can be rested on once they are acquired. Males must prove their masculinity constantly, throughout their lives, from cradle to grave, in an endless series of rites and performances. And a woman’s work is never done – she must continually convince herself and others that she is feminine enough.
Success is not guaranteed. Males in particular live in constant dread of losing their claim to manhood. Throughout history, males have been willing to risk and even sacrifice their lives, just so that people will say ‘He’s a real man!’
At least since the Agricultural Revolution, most human societies have been patriarchal societies that valued men more highly than women. No matter how a society defined ‘man’ and ‘woman’, to be a man was always better. Patriarchal societies educate men to think and act in a masculine way and women to think and act in a feminine way, punishing anyone who dares cross those boundaries. Yet they do not equally reward those who conform. Qualities considered masculine are more valued than those considered feminine, and members of a society who personify the feminine ideal get less than those who exemplify the masculine ideal. Fewer resources are invested in the health and education of women; they have fewer economic opportunities, less political power, and less freedom of movement. Gender is a race in which some of the runners compete only for the bronze medal.
True, a handful of women have made it to the alpha position, such as Cleopatra of Egypt, Empress Wu Zetian of China (c. AD 700) and Elizabeth I of England. Yet they are the exceptions that prove the rule. Throughout Elizabeth’s forty-five-year reign, all Members of Parliament were men, all officers in the Royal Navy and army were men, all judges and lawyers were men, all bishops and archbishops were men, all theologians and priests were men, all doctors and surgeons were men, all students and professors in all universities and colleges were men, all mayors and sheriffs were men, and almost all the writers, architects, poets, philosophers, painters, musicians and scientists were men.
Patriarchy has been the norm in almost all agricultural and industrial societies. It has tenaciously weathered political upheavals, social revolutions and economic transformations. Egypt, for example, was conquered numerous times over the centuries. Assyrians, Persians, Macedonians, Romans, Arabs, Mameluks, Turks and British occupied it – and its society always remained patriarchal. Egypt was governed by pharaonic law, Greek law, Roman law, Muslim law, Ottoman law and British law – and they all discriminated against people who were not ‘real men’.
Since patriarchy is so universal, it cannot be the product of some vicious circle that was kick-started by a chance occurrence. It is particularly noteworthy that even before 1492, most societies in both America and Afro-Asia were patriarchal, even though they had been out of contact for thousands of years. If patriarchy in Afro-Asia resulted from some chance occurrence, why were the Aztecs and Incas patriarchal? It is far more likely that even though the precise definition of ‘man’ and ‘woman’ varies between cultures, there is some universal biological reason why almost all cultures valued manhood over womanhood. We do not know what this reason is. There are plenty of theories, none of them convincing.
The most common theory points to the fact that men are stronger than women, and that they have used their greater physical power to force women into submission. A more subtle version of this claim argues that their strength allows men to monopolise tasks that demand hard manual labour, such as ploughing and harvesting. This gives them control of food production, which in turn translates into political clout.
There are two problems with this emphasis on muscle power. First, the statement that ‘men are stronger than women’ is true only on average, and only with regard to certain types of strength. Women are generally more resistant to hunger, disease and fatigue than men. There are also many women who can run faster and lift heavier weights than many men. Furthermore, and most problematically for this theory, women have, throughout history, been excluded mainly from jobs that require little physical effort (such as the priesthood, law and politics), while engaging in hard manual labour in the fields, in crafts and in the household. If social power were divided in direct relation to physical strength or stamina, women should have got far more of it.
Even more importantly, there simply is no direct relation between physical strength and social power among humans. People in their sixties usually exercise power over people in their twenties, even though twentysomethings are much stronger than their elders. The typical plantation owner in Alabama in the mid-nineteenth century could have been wrestled to the ground in seconds by any of the slaves cultivating his cotton fields. Boxing matches were not used to select Egyptian pharaohs or Catholic popes. In forager societies, political dominance generally resides with the person possessing the best social skills rather than the most developed musculature. In organised crime, the big boss is not necessarily the strongest man. He is often an older man who very rarely uses his own fists; he gets younger and fitter men to do the dirty jobs for him. A guy who thinks that the way to take over the syndicate is to beat up the don is unlikely to live long enough to learn from his mistake. Even among chimpanzees, the alpha male wins his position by building a stable coalition with other males and females, not through mindless violence.
In fact, human history shows that there is often an inverse relation between physical prowess and social power. In most societies, it’s the lower classes who do the manual labour. This may reflect Homo sapiens’ position in the food chain. If all that counted were raw physical abilities, Sapiens would have found themselves on a middle rung of the ladder. But their mental and social skills placed them at the top. It is therefore only natural that the chain of power within the species will also be determined by mental and social abilities more than by brute force. It is therefore hard to believe that the most influential and most stable social hierarchy in history is founded on men’s ability physically to coerce women.
Another theory explains that masculine dominance results not from strength but from aggression. Millions of years of evolution have made men far more violent than women. Women can match men as far as hatred, greed and abuse are concerned, but when push comes to shove, the theory goes, men are more willing to engage in raw physical violence. This is why throughout history warfare has been a masculine prerogative.
In times of war, men’s control of the armed forces has made them the masters of civilian society, too. They then used their control of civilian society to fight more and more wars, and the greater the number of wars, the greater men’s control of society. This feedback loop explains both the ubiquity of war and the ubiquity of patriarchy.
Recent studies of the hormonal and cognitive systems of men and women strengthen the assumption that men indeed have more aggressive and violent tendencies, and are therefore, on average, better suited to serve as common soldiers. Yet granted that the common soldiers are all men, does it follow that the ones managing the war and enjoying its fruits must also be men? That makes no sense. It’s like assuming that because all the slaves cultivating cotton fields are black, plantation owners will be black as well. Just as an all-black workforce might be controlled by an all-white management, why couldn’t an all-male soldiery be controlled by an all-female or at least partly female government? In fact, in numerous societies throughout history, the top officers did not work their way up from the rank of private. Aristocrats, the wealthy and the educated were automatically assigned officer rank and never served a day in the ranks.
When the Duke of Wellington, Napoleon’s nemesis, enlisted in the British army at the age of eighteen, he was immediately commissioned as an officer. He didn’t think much of the plebeians under his command. ‘We have in the service the scum of the earth as common soldiers,’ he wrote to a fellow aristocrat during the wars against France. These common soldiers were usually recruited from among the very poorest, or from ethnic minorities (such as the Irish Catholics). Their chances of ascending the military ranks were negligible. The senior ranks were reserved for dukes, princes and kings. But why only for dukes, and not for duchesses?
The French Empire in Africa was established and defended by the sweat and blood of Senegalese, Algerians and working-class Frenchmen. The percentage of well-born Frenchmen within the ranks was negligible. Yet the percentage of well-born Frenchmen within the small elite that led the French army, ruled the empire and enjoyed its fruits was very high. Why just Frenchmen, and not French women?
In China there was a long tradition of subjugating the army to the civilian bureaucracy, so mandarins who had never held a sword often ran the wars. ‘You do not waste good iron to make nails,’ went a common Chinese saying, meaning that really talented people join the civil bureaucracy, not the army. Why, then, were all of these mandarins men?
One can’t reasonably argue that their physical weakness or low testosterone levels prevented women from being successful mandarins, generals and politicians. In order to manage a war, you surely need stamina, but not much physical strength or aggressiveness. Wars are not pub brawls. They are very complex projects that require an extraordinary degree of organisation, cooperation and appeasement. The ability to maintain peace at home, acquire allies abroad, and understand what goes through the minds of other people (particularly your enemies) is usually the key to victory. Hence an aggressive brute is often the worst choice to run a war. Much better is a cooperative person who knows how to appease, how to manipulate and how to see things from different perspectives. This is the stuff empire-builders are made of. The militarily incompetent Augustus succeeded in establishing a stable imperial regime, achieving something that eluded both Julius Caesar and Alexander the Great, who were much better generals. Both his admiring contemporaries and modern historians often attribute this feat to his virtue of clementia – mildness and clemency.
Women are often stereotyped as better manipulators and appeasers than men, and are famed for their superior ability to see things from the perspective of others. If there’s any truth in these stereotypes, then women should have made excellent politicians and empire-builders, leaving the dirty work on the battlefields to testosterone-charged but simple-minded machos. Popular myths notwithstanding, this rarely happened in the real world. It is not at all clear why not.
A third type of biological explanation gives less importance to brute force and violence, and suggests that through millions of years of evolution, men and women evolved different survival and reproduction strategies. As men competed against each other for the opportunity to impregnate fertile women, an individual’s chances of reproduction depended above all on his ability to outperform and defeat other men. As time went by, the masculine genes that made it to the next generation were those belonging to the most ambitious, aggressive and competitive men.
A woman, on the other hand, had no problem finding a man willing to impregnate her. However, if she wanted her children to provide her with grandchildren, she needed to carry them in her womb for nine arduous months, and then nurture them for years. During that time she had fewer opportunities to obtain food, and required a lot of help. She needed a man. In order to ensure her own survival and the survival of her children, the woman had little choice but to agree to whatever conditions the man stipulated so that he would stick around and share some of the burden. As time went by, the feminine genes that made it to the next generation belonged to women who were submissive caretakers. Women who spent too much time fighting for power did not leave any of those powerful genes for future generations.
The result of these different survival strategies – so the theory goes – is that men have been programmed to be ambitious and competitive, and to excel in politics and business, whereas women have tended to move out of the way and dedicate their lives to raising children.
But this approach also seems to be belied by the empirical evidence. Particularly problematic is the assumption that women’s dependence on external help made them dependent on men, rather than on other women, and that male competitiveness made men socially dominant. There are many species of animals, such as elephants and bonobo chimpanzees, in which the dynamics between dependent females and competitive males result in a matriarchal society. Since females need external help, they are obliged to develop their social skills and learn how to cooperate and appease. They construct all-female social networks that help each member raise her children. Males, meanwhile, spend their time fighting and competing. Their social skills and social bonds remain underdeveloped. Bonobo and elephant societies are controlled by strong networks of cooperative females, while the self-centred and uncooperative males are pushed to the sidelines. Though bonobo females are weaker on average than the males, the females often gang up to beat males who overstep their limits.
If this is possible among bonobos and elephants, why not among Homo sapiens? Sapiens are relatively weak animals, whose advantage rests in their ability to cooperate in large numbers. If so, we should expect that dependent women, even if they are dependent on men, would use their superior social skills to cooperate to outmanoeuvre and manipulate aggressive, autonomous and self-centred men.
How did it happen that in the one species whose success depends above all on cooperation, individuals who are supposedly less cooperative (men) control individuals who are supposedly more cooperative (women)? At present, we have no good answer. Maybe the common assumptions are just wrong. Maybe males of the species Homo sapiens are characterised not by physical strength, aggressiveness and competitiveness, but rather by superior social skills and a greater tendency to cooperate. We just don’t know.
What we do know, however, is that during the last century gender roles have undergone a tremendous revolution. More and more societies today not only give men and women equal legal status, political rights and economic opportunities, but also completely rethink their most basic conceptions of gender and sexuality. Though the gender gap is still significant, events have been moving at a breathtaking speed. At the beginning of the twentieth century the idea of giving voting rights to women was generally seen in the USA as outrageous; the prospect of a female cabinet secretary or Supreme Court justice was simply ridiculous; whereas homosexuality was such a taboo subject that it could not even be openly discussed. At the beginning of the twenty-first century women’s voting rights are taken for granted; female cabinet secretaries are hardly a cause for comment; and in 2013 five US Supreme Court justices, three of them women, decided in favour of legalising same-sex marriages (overruling the objections of four male justices).
These dramatic changes are precisely what makes the history of gender so bewildering. If, as is being demonstrated today so clearly, the patriarchal system has been based on unfounded myths rather than on biological facts, what accounts for the universality and stability of this system?
24. Pilgrims circling the Ka’aba in Mecca.
{© Visual/Corbis.}
AFTER THE AGRICULTURAL REVOLUTION, human societies grew ever larger and more complex, while the imagined constructs sustaining the social order also became more elaborate. Myths and fictions accustomed people, nearly from the moment of birth, to think in certain ways, to behave in accordance with certain standards, to want certain things, and to observe certain rules. They thereby created artificial instincts that enabled millions of strangers to cooperate effectively. This network of artificial instincts is called ‘culture’.
During the first half of the twentieth century, scholars taught that every culture was complete and harmonious, possessing an unchanging essence that defined it for all time. Each human group had its own world view and system of social, legal and political arrangements that ran as smoothly as the planets going around the sun. In this view, cultures left to their own devices did not change. They just kept going at the same pace and in the same direction. Only a force applied from outside could change them. Anthropologists, historians and politicians thus referred to ‘Samoan Culture’ or ‘Tasmanian Culture’ as if the same beliefs, norms and values had characterised Samoans and Tasmanians from time immemorial.
Today, most scholars of culture have concluded that the opposite is true. Every culture has its typical beliefs, norms and values, but these are in constant flux. The culture may transform itself in response to changes in its environment or through interaction with neighbouring cultures. But cultures also undergo transitions due to their own internal dynamics. Even a completely isolated culture existing in an ecologically stable environment cannot avoid change. Unlike the laws of physics, which are free of inconsistencies, every man-made order is packed with internal contradictions. Cultures are constantly trying to reconcile these contradictions, and this process fuels change.
For instance, in medieval Europe the nobility believed in both Christianity and chivalry. A typical nobleman went to church in the morning, and listened as the priest held forth on the lives of the saints. ‘Vanity of vanities,’ said the priest, ‘all is vanity. Riches, lust and honour are dangerous temptations. You must rise above them, and follow in Christ’s footsteps. Be meek like Him, avoid violence and extravagance, and if attacked – just turn the other cheek.’ Returning home in a meek and pensive mood, the nobleman would change into his best silks and go to a banquet in his lord’s castle. There the wine flowed like water, the minstrel sang of Lancelot and Guinevere, and the guests exchanged dirty jokes and bloody war tales. ‘It is better to die,’ declared the barons, ‘than to live with shame. If someone questions your honour, only blood can wipe out the insult. And what is better in life than to see your enemies flee before you, and their pretty daughters tremble at your feet?’
The contradiction was never fully resolved. But as the European nobility, clergy and commoners grappled with it, their culture changed. One attempt to figure it out produced the Crusades. On crusade, knights could demonstrate their military prowess and their religious devotion at one stroke. The same contradiction produced military orders such as the Templars and Hospitallers, who tried to mesh Christian and chivalric ideals even more tightly. It was also responsible for a large part of medieval art and literature, such as the tales of King Arthur and the Holy Grail. What was Camelot but an attempt to prove that a good knight can and should be a good Christian, and that good Christians make the best knights?
Another example is the modern political order. Ever since the French Revolution, people throughout the world have gradually come to see both equality and individual freedom as fundamental values. Yet the two values contradict each other. Equality can be ensured only by curtailing the freedoms of those who are better off. Guaranteeing that every individual will be free to do as he wishes inevitably short-changes equality. The entire political history of the world since 1789 can be seen as a series of attempts to reconcile this contradiction.
Anyone who has read a novel by Charles Dickens knows that the liberal regimes of nineteenth-century Europe gave priority to individual freedom even if it meant throwing insolvent poor families in prison and giving orphans little choice but to join schools for pickpockets. Anyone who has read a novel by Alexander Solzhenitsyn knows how Communism’s egalitarian ideal produced brutal tyrannies that tried to control every aspect of daily life.
Contemporary American politics also revolve around this contradiction. Democrats want a more equitable society, even if it means raising taxes to fund programmes to help the poor, elderly and infirm. But that infringes on the freedom of individuals to spend their money as they wish. Why should the government force me to buy health insurance if I prefer using the money to put my kids through college? Republicans, on the other hand, want to maximise individual freedom, even if it means that the income gap between rich and poor will grow wider and that many Americans will not be able to afford health care.
Just as medieval culture did not manage to square chivalry with Christianity, so the modern world fails to square liberty with equality. But this is no defect. Such contradictions are an inseparable part of every human culture. In fact, they are culture’s engines, responsible for the creativity and dynamism of our species. Just as two clashing musical notes played together force a piece of music forward, so discord in our thoughts, ideas and values compels us to think, re-evaluate and criticise. Consistency is the playground of dull minds.
If tensions, conflicts and irresolvable dilemmas are the spice of every culture, a human being who belongs to any particular culture must hold contradictory beliefs and be riven by incompatible values. It’s such an essential feature of any culture that it even has a name: cognitive dissonance. Cognitive dissonance is often considered a failure of the human psyche. In fact, it is a vital asset. Had people been unable to hold contradictory beliefs and values, it would probably have been impossible to establish and maintain any human culture.
If, say, a Christian really wants to understand the Muslims who attend that mosque down the street, he shouldn’t look for a pristine set of values that every Muslim holds dear. Rather, he should enquire into the catch-22s of Muslim culture, those places where rules are at war and standards scuffle. It’s at the very spot where the Muslims teeter between two imperatives that you’ll understand them best.
Human cultures are in constant flux. Is this flux completely random, or does it have some overall pattern? In other words, does history have a direction?
The answer is yes. Over the millennia, small, simple cultures gradually coalesce into bigger and more complex civilisations, so that the world contains fewer and fewer mega-cultures, each of which is bigger and more complex. This is of course a very crude generalisation, true only at the macro level. At the micro level, it seems that for every group of cultures that coalesces into a mega-culture, there’s a mega-culture that breaks up into pieces. The Mongol Empire expanded to dominate a huge swathe of Asia and even parts of Europe, only to shatter into fragments. Christianity converted hundreds of millions of people at the same time that it splintered into innumerable sects. The Latin language spread through western and central Europe, then split into local dialects that themselves eventually became national languages. But these break-ups are temporary reversals in an inexorable trend towards unity.
Perceiving the direction of history is really a question of vantage point. When we adopt the proverbial bird’s-eye view of history, which examines developments in terms of decades or centuries, it’s hard to say whether history moves in the direction of unity or of diversity. However, to understand long-term processes the bird’s-eye view is too myopic. We would do better to adopt instead the viewpoint of a cosmic spy satellite, which scans millennia rather than centuries. From such a vantage point it becomes crystal clear that history is moving relentlessly towards unity. The sectioning of Christianity and the collapse of the Mongol Empire are just speed bumps on history’s highway.
The best way to appreciate the general direction of history is to count the number of separate human worlds that coexisted at any given moment on planet Earth. Today, we are used to thinking about the whole planet as a single unit, but for most of history, earth was in fact an entire galaxy of isolated human worlds.
Consider Tasmania, a medium-sized island south of Australia. It was cut off from the Australian mainland in about 10,000 BC as the end of the Ice Age caused the sea level to rise. A few thousand hunter-gatherers were left on the island, and had no contact with any other humans until the arrival of the Europeans in the nineteenth century. For 12,000 years, nobody else knew the Tasmanians were there, and they didn’t know that there was anyone else in the world. They had their wars, political struggles, social oscillations and cultural developments. Yet as far as the emperors of China or the rulers of Mesopotamia were concerned, Tasmania could just as well have been located on one of Jupiter’s moons. The Tasmanians lived in a world of their own.
America and Europe, too, were separate worlds for most of their histories. In AD 378, the Roman emperor Valens was defeated and killed by the Goths at the battle of Adrianople. In the same year, King Chak Tok Ich’aak of Tikal was defeated and killed by the army of Teotihuacan. (Tikal was an important Mayan city state, while Teotihuacan was then the largest city in America, with almost 250,000 inhabitants – of the same order of magnitude as its contemporary, Rome.) There was absolutely no connection between the defeat of Rome and the rise of Teotihuacan. Rome might just as well have been located on Mars, and Teotihuacan on Venus.
How many different human worlds coexisted on earth? Around 10,000 BC our planet contained many thousands of them. By 2000 BC, their numbers had dwindled to the hundreds, or at most a few thousand. By AD 1450, their numbers had declined even more drastically. At that time, just prior to the age of European exploration, earth still contained a significant number of dwarf worlds such as Tasmania. But close to 90 per cent of humans lived in a single mega-world: the world of Afro-Asia. Most of Asia, most of Europe, and most of Africa (including substantial chunks of sub-Saharan Africa) were already connected by significant cultural, political and economic ties.
Most of the remaining tenth of the world’s human population was divided between four worlds of considerable size and complexity:
1. The Mesoamerican World, which encompassed most of Central America and parts of North America.
2. The Andean World, which encompassed most of western South America.
3. The Australian World, which encompassed the continent of Australia.
4. The Oceanic World, which encompassed most of the islands of the south-western Pacific Ocean, from Hawaii to New Zealand.
Over the next 300 years, the Afro-Asian giant swallowed up all the other worlds. It consumed the Mesoamerican World in 1521, when the Spanish conquered the Aztec Empire. It took its first bite out of the Oceanic World at the same time, during Ferdinand Magellan’s circumnavigation of the globe, and soon after that completed its conquest. The Andean World collapsed in 1532, when Spanish conquistadors crushed the Inca Empire. The first European landed on the Australian continent in 1606, and that pristine world came to an end when British colonisation began in earnest in 1788. Fifteen years later the Britons established their first settlement in Tasmania, thus bringing the last autonomous human world into the Afro-Asian sphere of influence.
It took the Afro-Asian giant several centuries to digest all that it had swallowed, but the process was irreversible. Today almost all humans share the same geopolitical system (the entire planet is divided into internationally recognised states); the same economic system (capitalist market forces shape even the remotest corners of the globe); the same legal system (human rights and international law are valid everywhere, at least theoretically); and the same scientific system (experts in Iran, Israel, Australia and Argentina have exactly the same views about the structure of atoms or the treatment of tuberculosis).
The single global culture is not homogeneous. Just as a single organic body contains many different kinds of organs and cells, so our single global culture contains many different types of lifestyles and people, from New York stockbrokers to Afghan shepherds. Yet they are all closely connected and they influence one another in myriad ways. They still argue and fight, but they argue using the same concepts and fight using the same weapons. A real ‘clash of civilisations’ is like the proverbial dialogue of the deaf. Nobody can grasp what the other is saying. Today when Iran and the United States rattle swords at one another, they both speak the language of nation states, capitalist economies, international rights and nuclear physics.
Map 3. Earth in AD 1450. The named locations within the Afro-Asian World were places visited by the fourteenth-century Muslim traveller Ibn Battuta. A native of Tangier, in Morocco, Ibn Battuta visited Timbuktu, Zanzibar, southern Russia, Central Asia, India, China and Indonesia. His travels illustrate the unity of Afro-Asia on the eve of the modern era.
{Maps by Neil Gower}
We still talk a lot about ‘authentic’ cultures, but if by ‘authentic’ we mean something that developed independently, and that consists of ancient local traditions free of external influences, then there are no authentic cultures left on earth. Over the last few centuries, all cultures were changed almost beyond recognition by a flood of global influences.
One of the most interesting examples of this globalisation is ‘ethnic’ cuisine. In an Italian restaurant we expect to find spaghetti in tomato sauce; in Polish and Irish restaurants lots of potatoes; in an Argentinian restaurant we can choose between dozens of kinds of beefsteaks; in an Indian restaurant hot chillies are incorporated into just about everything; and the highlight at any Swiss café is thick hot chocolate under an alp of whipped cream. But none of these foods is native to those nations. Tomatoes, chilli peppers and cocoa are all Mexican in origin; they reached Europe and Asia only after the Spaniards conquered Mexico. Julius Caesar and Dante Alighieri never twirled tomato-drenched spaghetti on their forks (even forks hadn’t been invented yet), William Tell never tasted chocolate, and Buddha never spiced up his food with chilli. Potatoes reached Poland and Ireland no more than 400 years ago. The only steak you could obtain in Argentina in 1492 was from a llama.
Hollywood films have perpetuated an image of the Plains Indians as brave horsemen, courageously charging the wagons of European pioneers to protect the customs of their ancestors. However, these Native American horsemen were not the defenders of some ancient, authentic culture. Instead, they were the product of a major military and political revolution that swept the plains of western North America in the seventeenth and eighteenth centuries, a consequence of the arrival of European horses. In 1492 there were no horses in America. The culture of the nineteenth-century Sioux and Apache has many appealing features, but it was a modern culture – a result of global forces – much more than ‘authentic’.
From a practical perspective, the most important stage in the process of global unification occurred in the last few centuries, when empires grew and trade intensified. Ever-tightening links were formed between the people of Afro-Asia, America, Australia and Oceania. Thus Mexican chilli peppers made it into Indian food and Spanish cattle began grazing in Argentina. Yet from an ideological perspective, an even more important development occurred during the first millennium BC, when the idea of a universal order took root. For thousands of years previously, history was already moving slowly in the direction of global unity, but the idea of a universal order governing the entire world was still alien to most people.
25. Sioux chiefs (1905). Neither the Sioux nor any other Great Plains tribe had horses prior to 1492.
{© Universal History Archive/UIG/The Bridgeman Art Library.}
Homo sapiens evolved to think of people as divided into us and them. ‘Us’ was the group immediately around you, whoever you were, and ‘them’ was everyone else. In fact, no social animal is ever guided by the interests of the entire species to which it belongs. No chimpanzee cares about the interests of the chimpanzee species, no snail will lift a tentacle for the global snail community, no lion alpha male makes a bid for becoming the king of all lions, and at the entrance of no beehive can one find the slogan: ‘Worker bees of the world – unite!’
But beginning with the Cognitive Revolution, Homo sapiens became more and more exceptional in this respect. People began to cooperate on a regular basis with complete strangers, whom they imagined as ‘brothers’ or ‘friends’. Yet this brotherhood was not universal. Somewhere in the next valley, or beyond the mountain range, one could still sense ‘them’. When the first pharaoh, Menes, united Egypt around 3000 BC, it was clear to the Egyptians that Egypt had a border, and beyond the border lurked ‘barbarians’. The barbarians were alien, threatening, and interesting only to the extent that they had land or natural resources that the Egyptians wanted. All the imagined orders people created tended to ignore a substantial part of humankind.
The first millennium BC witnessed the appearance of three potentially universal orders, whose devotees could for the first time imagine the entire world and the entire human race as a single unit governed by a single set of laws. Everyone was ‘us’, at least potentially. There was no longer ‘them’. The first universal order to appear was economic: the monetary order. The second universal order was political: the imperial order. The third universal order was religious: the order of universal religions such as Buddhism, Christianity and Islam.
Merchants, conquerors and prophets were the first people who managed to transcend the binary evolutionary division, ‘us vs them’, and to foresee the potential unity of humankind. For the merchants, the entire world was a single market and all humans were potential customers. They tried to establish an economic order that would apply to all, everywhere. For the conquerors, the entire world was a single empire and all humans were potential subjects, and for the prophets, the entire world held a single truth and all humans were potential believers. They too tried to establish an order that would be applicable for everyone everywhere.
During the last three millennia, people made more and more ambitious attempts to realise that global vision. The next three chapters discuss how money, empires and universal religions spread, and how they laid the foundation of the united world of today. We begin with the story of the greatest conqueror in history, a conqueror possessed of extreme tolerance and adaptability, thereby turning people into ardent disciples. This conqueror is money. People who do not believe in the same god or obey the same king are more than willing to use the same money. Osama Bin Laden, for all his hatred of American culture, American religion and American politics, was very fond of American dollars. How did money succeed where gods and kings failed?
IN 1519 HERNÁN CORTÉS AND HIS CONQUISTADORS invaded Mexico, hitherto an isolated human world. The Aztecs, as the people who lived there called themselves, quickly noticed that the aliens showed an extraordinary interest in a certain yellow metal. In fact, they never seemed to stop talking about it. The natives were not unfamiliar with gold – it was pretty and easy to work, so they used it to make jewellery and statues, and they occasionally used gold dust as a medium of exchange. But when an Aztec wanted to buy something, he generally paid in cocoa beans or bolts of cloth. The Spanish obsession with gold thus seemed inexplicable. What was so important about a metal that could not be eaten, drunk or woven, and was too soft to use for tools or weapons? When the natives questioned Cortés as to why the Spaniards had such a passion for gold, the conquistador answered, ‘Because I and my companions suffer from a disease of the heart which can be cured only with gold.’1
In the Afro-Asian world from which the Spaniards came, the obsession for gold was indeed an epidemic. Even the bitterest of enemies lusted after the same useless yellow metal. Three centuries before the conquest of Mexico, the ancestors of Cortés and his army waged a bloody war of religion against the Muslim kingdoms in Iberia and North Africa. The followers of Christ and the followers of Allah killed each other by the thousands, devastated fields and orchards, and turned prosperous cities into smouldering ruins – all for the greater glory of Christ or Allah.
As the Christians gradually gained the upper hand, they marked their victories not only by destroying mosques and building churches, but also by issuing new gold and silver coins bearing the sign of the cross and thanking God for His help in combating the infidels. Yet alongside the new currency, the victors minted another type of coin, called the millares, which carried a somewhat different message. These square coins made by the Christian conquerors were emblazoned with flowing Arabic script that declared: ‘There is no god except Allah, and Muhammad is Allah’s messenger.’ Even the Catholic bishops of Melgueil and Agde issued these faithful copies of popular Muslim coins, and God-fearing Christians happily used them.2
Tolerance flourished on the other side of the hill too. Muslim merchants in North Africa conducted business using Christian coins such as the Florentine florin, the Venetian ducat and the Neapolitan gigliato. Even Muslim rulers who called for jihad against the infidel Christians were glad to receive taxes in coins that invoked Christ and His Virgin Mother.3
Hunter-gatherers had no money. Each band hunted, gathered and manufactured almost everything it required, from meat to medicine, from sandals to sorcery. Different band members may have specialised in different tasks, but they shared their goods and services through an economy of favours and obligations. A piece of meat given for free would carry with it the assumption of reciprocity – say, free medical assistance. The band was economically independent; only a few rare items that could not be found locally – seashells, pigments, obsidian and the like – had to be obtained from strangers. This could usually be done by simple barter: ‘We’ll give you pretty seashells, and you’ll give us high-quality flint.’
Little of this changed with the onset of the Agricultural Revolution. Most people continued to live in small, intimate communities. Much like a hunter-gatherer band, each village was a self-sufficient economic unit, maintained by mutual favours and obligations plus a little barter with outsiders. One villager may have been particularly adept at making shoes, another at dispensing medical care, so villagers knew where to turn when barefoot or sick. But villages were small and their economies limited, so there could be no full-time shoemakers and doctors.
The rise of cities and kingdoms and the improvement in transport infrastructure brought about new opportunities for specialisation. Densely populated cities provided full-time employment not just for professional shoemakers and doctors, but also for carpenters, priests, soldiers and lawyers. Villages that gained a reputation for producing really good wine, olive oil or ceramics discovered that it was worth their while to specialise nearly exclusively in that product and trade it with other settlements for all the other goods they needed. This made a lot of sense. Climates and soils differ, so why drink mediocre wine from your backyard if you can buy a smoother variety from a place whose soil and climate is much better suited to grape vines? If the clay in your backyard makes stronger and prettier pots, then you can make an exchange. Furthermore, full-time specialist vintners and potters, not to mention doctors and lawyers, can hone their expertise to the benefit of all. But specialisation created a problem – how do you manage the exchange of goods between the specialists?
An economy of favours and obligations doesn’t work when large numbers of strangers try to cooperate. It’s one thing to provide free assistance to a sister or a neighbour, a very different thing to take care of foreigners who might never reciprocate the favour. One can fall back on barter. But barter is effective only when exchanging a limited range of products. It cannot form the basis for a complex economy.4
In order to understand the limitations of barter, imagine that you own an apple orchard in the hill country that produces the crispest, sweetest apples in the entire province. You work so hard in your orchard that your shoes wear out. So you harness up your donkey cart and head to the market town down by the river. Your neighbour told you that a shoemaker on the south end of the marketplace made him a really sturdy pair of boots that’s lasted him through five seasons. You find the shoemaker’s shop and offer to barter some of your apples in exchange for the shoes you need.
The shoemaker hesitates. How many apples should he ask for in payment? Every day he encounters dozens of customers, a few of whom bring along sacks of apples, while others carry wheat, goats or cloth – all of varying quality. Still others offer their expertise in petitioning the king or curing backaches. The last time the shoemaker exchanged shoes for apples was three months ago, and back then he asked for three sacks of apples. Or was it four? But come to think of it, those apples were sour valley apples, rather than prime hill apples. On the other hand, on that previous occasion, the apples were given in exchange for small women’s shoes. This fellow is asking for man-size boots. Besides, in recent weeks a disease has decimated the flocks around town, and skins are becoming scarce. The tanners are starting to demand twice as many finished shoes in exchange for the same quantity of leather. Shouldn’t that be taken into consideration?
In a barter economy, every day the shoemaker and the apple grower will have to learn anew the relative prices of dozens of commodities. If one hundred different commodities are traded in the market, then buyers and sellers will have to know 4,950 different exchange rates. And if 1,000 different commodities are traded, buyers and sellers must juggle 499,500 different exchange rates!5 How do you figure it out?
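The arithmetic behind these figures is simple combinatorics: with n commodities there is one exchange rate for every unordered pair of goods, so the number of rates a trader must track is

\[
\binom{n}{2} = \frac{n(n-1)}{2}, \qquad \frac{100 \times 99}{2} = 4{,}950, \qquad \frac{1{,}000 \times 999}{2} = 499{,}500.
\]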
It gets worse. Even if you manage to calculate how many apples equal one pair of shoes, barter is not always possible. After all, a trade requires that each side want what the other has to offer. What happens if the shoemaker doesn’t like apples and, at the moment in question, what he really wants is a divorce? True, the farmer could look for a lawyer who likes apples and set up a three-way deal. But what if the lawyer is full up on apples but really needs a haircut?
Some societies tried to solve the problem by establishing a central barter system that collected products from specialist growers and manufacturers and distributed them to those who needed them. The largest and most famous such experiment was conducted in the Soviet Union, and it failed miserably. ‘Everyone would work according to their abilities, and receive according to their needs’ turned in practice into ‘everyone would work as little as they could get away with, and receive as much as they could grab’. More moderate and more successful experiments were made on other occasions, for example in the Inca Empire. Yet most societies found an easier way to connect large numbers of experts – they developed money.
Money was created many times in many places. Its development required no technological breakthroughs – it was a purely mental revolution. It involved the creation of a new inter-subjective reality that exists solely in people’s shared imagination.
Money is not coins and banknotes. Money is anything that people are willing to use in order to represent systematically the value of other things for the purpose of exchanging goods and services. Money enables people to compare quickly and easily the value of different commodities (such as apples, shoes and divorces), to easily exchange one thing for another, and to store wealth conveniently. There have been many types of money. The most familiar is the coin, which is a standardised piece of imprinted metal. Yet money existed long before the invention of coinage, and cultures have prospered using other things as currency, such as shells, cattle, skins, salt, grain, beads, cloth and promissory notes. Cowry shells were used as money for about 4,000 years all over Africa, South Asia, East Asia and Oceania. Taxes could still be paid in cowry shells in British Uganda in the early twentieth century.
26. In ancient Chinese script the cowry-shell sign represented money, in words such as ‘to sell’ or ‘reward’.
{Illustration based on: Joe Cribb (ed.), Money: From Cowrie Shells to Credit Cards (London: Published for the Trustees of the British Museum by British Museum Publications, 1986), 27.}
In modern prisons and POW camps, cigarettes have often served as money. Even non-smoking prisoners have been willing to accept cigarettes in payment, and to calculate the value of all other goods and services in cigarettes. One Auschwitz survivor described the cigarette currency used in the camp: ‘We had our own currency, whose value no one questioned: the cigarette. The price of every article was stated in cigarettes . . . In “normal” times, that is, when the candidates to the gas chambers were coming in at a regular pace, a loaf of bread cost twelve cigarettes; a 10-ounce package of margarine, thirty; a watch, eighty to 200; a 0.25-gallon bottle of alcohol, 400 cigarettes!’6
In fact, even today coins and banknotes are a rare form of money. The sum total of money in the world is about $60 trillion, yet the sum total of coins and banknotes is less than $6 trillion.7 More than 90 percent of all money – more than $50 trillion appearing in our accounts – exists only on computer servers. Accordingly, most business transactions are executed by moving electronic data from one computer file to another, without any exchange of physical cash. Only a criminal buys a house, for example, by handing over a suitcase full of banknotes. As long as people are willing to trade goods and services in exchange for electronic data, it’s even better than shiny coins and crisp banknotes – lighter, less bulky, and easier to keep track of.
For complex commercial systems to function, some kind of money is indispensable. A shoemaker in a money economy needs to know only the prices charged for various kinds of shoes – there is no need to memorise the exchange rates between shoes and apples or goats. Money also frees apple experts from the need to search out apple-craving shoemakers, because everyone always wants money. This is perhaps its most basic quality. Everyone always wants money because everyone else also always wants money, which means you can exchange money for whatever you want or need. The shoemaker will always be happy to take your money, because no matter what he really wants – apples, goats or a divorce – he can get it in exchange for money.
Money is thus a universal medium of exchange that enables people to convert almost everything into almost anything else. Brawn gets converted to brain when a discharged soldier finances his college tuition with his military benefits. Land gets converted into loyalty when a baron sells property to support his retainers. Health is converted to justice when a physician uses her fees to hire a lawyer – or bribe a judge. It is even possible to convert sex into salvation, as fifteenth-century prostitutes did when they slept with men for money, which they in turn used to buy indulgences from the Catholic Church.
Ideal types of money enable people not merely to turn one thing into another, but to store wealth as well. Many valuables cannot be stored – such as time or beauty. Some things can be stored only for a short time, such as strawberries. Other things are more durable, but take up a lot of space and require expensive facilities and care. Grain, for example, can be stored for years, but to do so you need to build huge storehouses and guard against rats, mould, water, fire and thieves. Money, whether paper, computer bits or cowry shells, solves these problems. Cowry shells don’t rot, are unpalatable to rats, can survive fires and are compact enough to be locked up in a safe.
In order to use wealth it is not enough just to store it. It often needs to be transported from place to place. Some forms of wealth, such as real estate, cannot be transported at all. Commodities such as wheat and rice can be transported only with difficulty. Imagine a wealthy farmer living in a moneyless land who emigrates to a distant province. His wealth consists mainly of his house and rice paddies. The farmer cannot take with him the house or the paddies. He might exchange them for tons of rice, but it would be very burdensome and expensive to transport all that rice. Money solves these problems. The farmer can sell his property in exchange for a sack of cowry shells, which he can easily carry wherever he goes.
Because money can convert, store and transport wealth easily and cheaply, it made a vital contribution to the appearance of complex commercial networks and dynamic markets. Without money, commercial networks and markets would have been doomed to remain very limited in their size, complexity and dynamism.
Cowry shells and dollars have value only in our common imagination. Their worth is not inherent in the chemical structure of the shells and paper, or their colour, or their shape. In other words, money isn’t a material reality – it is a psychological construct. It works by converting matter into mind. But why does it succeed? Why should anyone be willing to exchange a fertile rice paddy for a handful of useless cowry shells? Why are you willing to flip hamburgers, sell health insurance or babysit three obnoxious brats when all you get for your exertions is a few pieces of coloured paper?
People are willing to do such things when they trust the figments of their collective imagination. Trust is the raw material from which all types of money are minted. When a wealthy farmer sold his possessions for a sack of cowry shells and travelled with them to another province, he trusted that upon reaching his destination other people would be willing to sell him rice, houses and fields in exchange for the shells. Money is accordingly a system of mutual trust, and not just any system of mutual trust: money is the most universal and most efficient system of mutual trust ever devised.
What created this trust was a very complex and long-term network of political, social and economic relations. Why do I believe in the cowry shell or gold coin or dollar bill? Because my neighbours believe in them. And my neighbours believe in them because I believe in them. And we all believe in them because our king believes in them and demands them in taxes, and because our priest believes in them and demands them in tithes. Take a dollar bill and look at it carefully. You will see that it is simply a colourful piece of paper with the signature of the US secretary of the treasury on one side, and the slogan ‘In God We Trust’ on the other. We accept the dollar in payment, because we trust in God and the US secretary of the treasury. The crucial role of trust explains why our financial systems are so tightly bound up with our political, social and ideological systems, why financial crises are often triggered by political developments, and why the stock market can rise or fall depending on the way traders feel on a particular morning.
Initially, when the first versions of money were created, people didn’t have this sort of trust, so it was necessary to define as ‘money’ things that had real intrinsic value. History’s first known money – Sumerian barley money – is a good example. It appeared in Sumer around 3000 BC, at the same time and place, and under the same circumstances, in which writing appeared. Just as writing developed to answer the needs of intensifying administrative activities, so barley money developed to answer the needs of intensifying economic activities.
Barley money was simply barley – fixed amounts of barley grains used as a universal measure for evaluating and exchanging all other goods and services. The most common measurement was the sila, equivalent to roughly 0.25 gallons. Standardised bowls, each capable of containing one sila, were mass-produced so that whenever people needed to buy or sell anything, it was easy to measure the necessary amounts of barley. Salaries, too, were set and paid in silas of barley. A male labourer earned sixty silas a month, a female labourer thirty silas. A foreman could earn between 1,200 and 5,000 silas. Not even the most ravenous foreman could eat 1,250 gallons of barley a month, but he could use the silas he didn’t eat to buy all sorts of other commodities – oil, goats, slaves, and something else to eat besides barley.8
Even though barley has intrinsic value, it was not easy to convince people to use it as money rather than as just another commodity. In order to understand why, just think what would happen if you took a sack full of barley to your local shopping centre, and tried to buy a shirt or a pizza. The vendors would probably call security. Still, it was somewhat easier to build trust in barley as the first type of money, because barley has an inherent biological value. Humans can eat it. On the other hand, it was difficult to store and transport barley. The real breakthrough in monetary history occurred when people gained trust in money that lacked inherent value, but was easier to store and transport. Such money appeared in ancient Mesopotamia in the middle of the third millennium BC. This was the silver shekel.
The silver shekel was not a coin, but rather 0.3 ounces of silver. When Hammurabi’s Code declared that a superior man who killed a slave woman must pay her owner twenty silver shekels, it meant that he had to pay 6 ounces of silver, not twenty coins. Most monetary terms in the Old Testament are given in terms of silver rather than coins. Joseph’s brothers sold him to the Ishmaelites for twenty silver shekels, or rather 6 ounces of silver (the same price as a slave woman – he was a youth, after all).
Unlike the barley sila, the silver shekel had no inherent value. You cannot eat, drink or clothe yourself in silver, and it’s too soft for making useful tools – ploughshares or swords of silver would crumple almost as fast as ones made out of aluminium foil. When they are used for anything, silver and gold are made into jewellery, crowns and other status symbols – luxury goods that members of a particular culture identify with high social status. Their value is purely cultural.
Set weights of precious metals eventually gave birth to coins. The first coins in history were struck around 640 BC by King Alyattes of Lydia, in western Anatolia. These coins had a standardised weight of gold or silver, and were imprinted with an identification mark. The mark testified to two things. First, it indicated how much precious metal the coin contained. Second, it identified the authority that issued the coin and that guaranteed its contents. Almost all coins in use today are descendants of the Lydian coins.
Coins had two important advantages over unmarked metal ingots. First, the latter had to be weighed for every transaction. Second, weighing the ingot is not enough. How does the shoemaker know that the silver ingot I put down for my boots is really made of pure silver, and not of lead covered on the outside by a thin silver coating? Coins help solve these problems. The mark imprinted on them testifies to their exact value, so the shoemaker doesn’t have to keep a scale on his cash register. More importantly, the mark on the coin is the signature of some political authority that guarantees the coin’s value.
The shape and size of the mark varied tremendously throughout history, but the message was always the same: ‘I, the Great King So-And-So, give you my personal word that this metal disc contains exactly 0.2 ounces of gold. If anyone dares counterfeit this coin, it means he is fabricating my own signature, which would be a blot on my reputation. I will punish such a crime with the utmost severity.’ That’s why counterfeiting money has always been considered a much more serious crime than other acts of deception. Counterfeiting is not just cheating – it’s a breach of sovereignty, an act of subversion against the power, privileges and person of the king. The legal term is lese-majesty (violating majesty), and was typically punished by torture and death. As long as people trusted the power and integrity of the king, they trusted his coins. Total strangers could easily agree on the worth of a Roman denarius coin, because they trusted the power and integrity of the Roman emperor, whose name and picture adorned it.
27. One of the earliest coins in history, from Lydia of the seventh century BC.
{© akg/Bible Land Pictures.}
In turn, the power of the emperor rested on the denarius. Just think how difficult it would have been to maintain the Roman Empire without coins – if the emperor had to raise taxes and pay salaries in barley and wheat. It would have been impossible to collect barley taxes in Syria, transport the funds to the central treasury in Rome, and transport them again to Britain in order to pay the legions there. It would have been equally difficult to maintain the empire if the inhabitants of the city of Rome believed in gold coins, but the subject populations rejected this belief, putting their trust instead in cowry shells, ivory beads or rolls of cloth.
The trust in Rome’s coins was so strong that even outside the empire’s borders, people were happy to receive payment in denarii. In the first century AD, Roman coins were an accepted medium of exchange in the markets of India, even though the closest Roman legion was thousands of miles away. The Indians had such a strong confidence in the denarius and the image of the emperor that when local rulers struck coins of their own they closely imitated the denarius, down to the portrait of the Roman emperor! The name ‘denarius’ became a generic name for coins. Muslim caliphs Arabicised this name and issued ‘dinars’. The dinar is still the official name of the currency in Jordan, Iraq, Serbia, Macedonia, Tunisia and several other countries.
As Lydian-style coinage was spreading from the Mediterranean to the Indian Ocean, China developed a slightly different monetary system, based on bronze coins and unmarked silver and gold ingots. Yet the two monetary systems had enough in common (especially the reliance on gold and silver) that close monetary and commercial relations were established between the Chinese zone and the Lydian zone. Muslim and European merchants and conquerors gradually spread the Lydian system and the gospel of gold to the far corners of the earth. By the late modern era the entire world was a single monetary zone, relying first on gold and silver, and later on a few trusted currencies such as the British pound and the American dollar.
The appearance of a single transnational and transcultural monetary zone laid the foundation for the unification of Afro-Asia, and eventually of the entire globe, into a single economic and political sphere. People continued to speak mutually incomprehensible languages, obey different rulers and worship distinct gods, but all believed in gold and silver and in gold and silver coins. Without this shared belief, global trading networks would have been virtually impossible. The gold and silver that sixteenth-century conquistadors found in America enabled European merchants to buy silk, porcelain and spices in East Asia, thereby moving the wheels of economic growth in both Europe and East Asia. Most of the gold and silver mined in Mexico and the Andes slipped through European fingers to find a welcome home in the purses of Chinese silk and porcelain manufacturers. What would have happened to the global economy if the Chinese hadn’t suffered from the same ‘disease of the heart’ that afflicted Cortés and his companions – and had refused to accept payment in gold and silver?
Yet why should Chinese, Indians, Muslims and Spaniards – who belonged to very different cultures that failed to agree about much of anything – nevertheless share the belief in gold? Why didn’t it happen that Spaniards believed in gold, while Muslims believed in barley, Indians in cowry shells, and Chinese in rolls of silk? Economists have a ready answer. Once trade connects two areas, the forces of supply and demand tend to equalise the prices of transportable goods. In order to understand why, consider a hypothetical case. Assume that when regular trade opened between India and the Mediterranean, Indians were uninterested in gold, so it was almost worthless. But in the Mediterranean, gold was a coveted status symbol, hence its value was high. What would happen next?
Merchants travelling between India and the Mediterranean would notice the difference in the value of gold. In order to make a profit, they would buy gold cheaply in India and sell it dearly in the Mediterranean. Consequently, the demand for gold in India would skyrocket, as would its value. At the same time the Mediterranean would experience an influx of gold, whose value would consequently drop. Within a short time the value of gold in India and the Mediterranean would be quite similar. The mere fact that Mediterranean people believed in gold would cause Indians to start believing in it as well. Even if Indians still had no real use for gold, the fact that Mediterranean people wanted it would be enough to make the Indians value it.
Similarly, the fact that another person believes in cowry shells, or dollars, or electronic data, is enough to strengthen our own belief in them, even if that person is otherwise hated, despised or ridiculed by us. Christians and Muslims who could not agree on religious beliefs could nevertheless agree on a monetary belief, because whereas religion asks us to believe in something, money asks us to believe that other people believe in something.
For thousands of years, philosophers, thinkers and prophets have besmirched money and called it the root of all evil. Be that as it may, money is also the apogee of human tolerance. Money is more open-minded than language, state laws, cultural codes, religious beliefs and social habits. Money is the only trust system created by humans that can bridge almost any cultural gap, and that does not discriminate on the basis of religion, gender, race, age or sexual orientation. Thanks to money, even people who don’t know each other and don’t trust each other can nevertheless cooperate effectively.
Money is based on two universal principles:
a. Universal convertibility: with money as an alchemist, you can turn land into loyalty, justice into health, and violence into knowledge.
b. Universal trust: with money as a go-between, any two people can cooperate on any project.
These principles have enabled millions of strangers to cooperate effectively in trade and industry. But these seemingly benign principles have a dark side. When everything is convertible, and when trust depends on anonymous coins and cowry shells, it corrodes local traditions, intimate relations and human values, replacing them with the cold laws of supply and demand.
Human communities and families have always been based on belief in ‘priceless’ things, such as honour, loyalty, morality and love. These things lie outside the domain of the market, and they shouldn’t be bought or sold for money. Even if the market offers a good price, certain things just aren’t done. Parents mustn’t sell their children into slavery; a devout Christian must not commit a mortal sin; a loyal knight must never betray his lord; and ancestral tribal lands shall never be sold to foreigners.
Money has always tried to break through these barriers, like water seeping through cracks in a dam. Parents have been reduced to selling some of their children into slavery in order to buy food for the others. Devout Christians have murdered, stolen and cheated – and later used their spoils to buy forgiveness from the church. Ambitious knights auctioned their allegiance to the highest bidder, while securing the loyalty of their own followers by cash payments. Tribal lands were sold to foreigners from the other side of the world in order to purchase an entry ticket into the global economy.
Money has an even darker side. For although money builds universal trust between strangers, this trust is invested not in humans, communities or sacred values, but in money itself and in the impersonal systems that back it. We do not trust the stranger, or the next-door neighbour – we trust the coin they hold. If they run out of coins, we run out of trust. As money brings down the dams of community, religion and state, the world is in danger of becoming one big and rather heartless marketplace.
Hence the economic history of humankind is a delicate dance. People rely on money to facilitate cooperation with strangers, but they’re afraid it will corrupt human values and intimate relations. With one hand people willingly destroy the communal dams that held at bay the movement of money and commerce for so long. Yet with the other hand they build new dams to protect society, religion and the environment from enslavement to market forces.
It is common nowadays to believe that the market always prevails, and that the dams erected by kings, priests and communities cannot long hold back the tides of money. This is naïve. Brutal warriors, religious fanatics and concerned citizens have repeatedly managed to trounce calculating merchants, and even to reshape the economy. It is therefore impossible to understand the unification of humankind as a purely economic process. In order to understand how thousands of isolated cultures coalesced over time to form the global village of today, we must take into account the role of gold and silver, but we cannot disregard the equally crucial role of steel.
THE ANCIENT ROMANS WERE USED TO being defeated. Like the rulers of most of history’s great empires, they could lose battle after battle but still win the war. An empire that cannot sustain a blow and remain standing is not really an empire. Yet even the Romans found it hard to stomach the news arriving from northern Iberia in the middle of the second century BC. A small, insignificant mountain town called Numantia, inhabited by the peninsula’s native Celts, had dared to throw off the Roman yoke. Rome at the time was the unquestioned master of the entire Mediterranean basin, having vanquished the Macedonian and Seleucid empires, subjugated the proud city states of Greece, and turned Carthage into a smouldering ruin. The Numantians had nothing on their side but their fierce love of freedom and their inhospitable terrain. Yet they forced legion after legion to surrender or retreat in shame.
Eventually, in 134 BC, Roman patience snapped. The Senate decided to send Scipio Aemilianus, Rome’s foremost general and the man who had levelled Carthage, to take care of the Numantians. He was given a massive army of more than 30,000 soldiers. Scipio, who respected the fighting spirit and martial skill of the Numantians, preferred not to waste his soldiers in unnecessary combat. Instead, he encircled Numantia with a line of fortifications, blocking the town’s contact with the outside world. Hunger did his work for him. After more than a year, the food supply ran out. When the Numantians realised that all hope was lost, they burned down their town; according to Roman accounts, most of them killed themselves so as not to become Roman slaves.
Numantia later became a symbol of Spanish independence and courage. Miguel de Cervantes, the author of Don Quixote, wrote a tragedy called The Siege of Numantia which ends with the town’s destruction, but also with a vision of Spain’s future greatness. Poets composed paeans to its fierce defenders and painters committed majestic depictions of the siege to canvas. In 1882, its ruins were declared a ‘national monument’ and became a pilgrimage site for Spanish patriots. In the 1950s and 1960s, the most popular comic books in Spain weren’t about Superman and Spiderman – they told of the adventures of El Jabato, an imaginary ancient Iberian hero who fought against the Roman oppressors. The ancient Numantians are to this day Spain’s paragons of heroism and patriotism, cast as role models for the country’s young people.
Yet Spanish patriots extol the Numantians in Spanish – a Romance language that is a progeny of Scipio’s Latin. The Numantians spoke a now dead and lost Celtic language. Cervantes wrote The Siege of Numantia in Latin script, and the play follows Graeco-Roman artistic models. Numantia had no theatres. Spanish patriots who admire Numantian heroism tend also to be loyal followers of the Roman Catholic Church – don’t miss that first word – a church whose leader still sits in Rome and whose God prefers to be addressed in Latin. Similarly, modern Spanish law derives from Roman law; Spanish politics is built on Roman foundations; and Spanish cuisine and architecture owe a far greater debt to Roman legacies than to those of the Celts of Iberia. Nothing is really left of Numantia save ruins. Even its story has reached us thanks only to the writings of Roman historians. It was tailored to the tastes of Roman audiences, who relished tales of freedom-loving barbarians. The victory of Rome over Numantia was so complete that the victors co-opted the very memory of the vanquished.
It’s not our kind of story. We like to see underdogs win. But there is no justice in history. Most past cultures have sooner or later fallen prey to the armies of some ruthless empire, which have consigned them to oblivion. Empires, too, ultimately fall, but they tend to leave behind rich and enduring legacies. Almost all people in the twenty-first century are the offspring of one empire or another.
An empire is a political order with two important characteristics. First, to qualify for that designation you have to rule over a significant number of distinct peoples, each possessing a different cultural identity and a separate territory. How many peoples exactly? Two or three is not sufficient. Twenty or thirty is plenty. The imperial threshold passes somewhere in between.
Second, empires are characterised by flexible borders and a potentially unlimited appetite. They can swallow and digest more and more nations and territories without altering their basic structure or identity. The British state of today has fairly clear borders that cannot be exceeded without altering the fundamental structure and identity of the state. A century ago almost any place on earth could have become part of the British Empire.
Cultural diversity and territorial flexibility give empires not only their unique character, but also their central role in history. It’s thanks to these two characteristics that empires have managed to unite diverse ethnic groups and ecological zones under a single political umbrella, thereby fusing together larger and larger segments of the human species and of planet Earth.
It should be stressed that an empire is defined solely by its cultural diversity and flexible borders, rather than by its origins, its form of government, its territorial extent, or the size of its population. An empire need not emerge from military conquest. The Athenian Empire began its life as a voluntary league, and the Habsburg Empire was born in wedlock, cobbled together by a string of shrewd marriage alliances. Nor must an empire be ruled by an autocratic emperor. The British Empire, the largest empire in history, was ruled by a democracy. Other democratic (or at least republican) empires have included the modern Dutch, French, Belgian and American empires, as well as the premodern empires of Novgorod, Rome, Carthage and Athens.
Size, too, does not really matter. Empires can be puny. The Athenian Empire at its zenith was much smaller in size and population than today’s Greece. The Aztec Empire was smaller than today’s Mexico. Both were nevertheless empires, whereas modern Greece and modern Mexico are not, because the former gradually subdued dozens and even hundreds of different polities while the latter have not. Athens lorded it over more than a hundred formerly independent city states, whereas the Aztec Empire, if we can trust its taxation records, ruled 371 different tribes and peoples.1
How was it possible to squeeze such a human potpourri into the territory of a modest modern state? It was possible because in the past there were many more distinct peoples in the world, each of which had a smaller population and occupied less territory than today’s typical people. The land between the Mediterranean and the Jordan River, which today struggles to satisfy the ambitions of just two peoples, easily accommodated in biblical times dozens of nations, tribes, petty kingdoms and city states.
Empires were one of the main reasons for the drastic reduction in human diversity. The imperial steamroller gradually obliterated the unique characteristics of numerous peoples (such as the Numantians), forging out of them new and much larger groups.
In our time, ‘imperialist’ ranks second only to ‘fascist’ in the lexicon of political swear words. The contemporary critique of empires commonly takes two forms:
1. Empires do not work. In the long run, it is not possible to rule effectively over a large number of conquered peoples.
2. Even if it can be done, it should not be done, because empires are evil engines of destruction and exploitation. Every people has a right to self-determination, and should never be subject to the rule of another.
From a historical perspective, the first statement is plain nonsense, and the second is deeply problematic.
The truth is that empire has been the world’s most common form of political organisation for the last 2,500 years. Most humans during these two and a half millennia have lived in empires. Empire is also a very stable form of government. Most empires have found it alarmingly easy to put down rebellions. In general, they have been toppled only by external invasion or by a split within the ruling elite. Conversely, conquered peoples don’t have a very good record of freeing themselves from their imperial overlords. Most have remained subjugated for hundreds of years. Typically, they have been slowly digested by the conquering empire, until their distinct cultures fizzled out.
For example, when the Western Roman Empire finally fell to invading Germanic tribes in AD 476, the Numantians, Arverni, Helvetians, Samnites, Lusitanians, Umbrians, Etruscans and hundreds of other forgotten peoples whom the Romans conquered centuries earlier did not emerge from the empire’s eviscerated carcass like Jonah from the belly of the great fish. None of them were left. The biological descendants of the people who had identified themselves as members of those nations, who had spoken their languages, worshipped their gods and told their myths and legends, now thought, spoke and worshipped as Romans.
In many cases, the destruction of one empire hardly meant independence for subject peoples. Instead, a new empire stepped into the vacuum created when the old one collapsed or retreated. Nowhere has this been more obvious than in the Middle East. The current political constellation in that region – a balance of power between many independent political entities with more or less stable borders – is almost without parallel any time in the last several millennia. The last time the Middle East experienced such a situation was in the eighth century BC – almost 3,000 years ago! From the rise of the Neo-Assyrian Empire in the eighth century BC until the collapse of the British and French empires in the mid-twentieth century AD, the Middle East passed from the hands of one empire into the hands of another, like a baton in a relay race. And by the time the British and French finally dropped the baton, the Aramaeans, the Ammonites, the Phoenicians, the Philistines, the Moabites, the Edomites and the other peoples conquered by the Assyrians had long disappeared.
True, today’s Jews, Armenians and Georgians claim with some measure of justice that they are the offspring of ancient Middle Eastern peoples. Yet these are only exceptions that prove the rule, and even these claims are somewhat exaggerated. It goes without saying that the political, economic and social practices of modern Jews, for example, owe far more to the empires under which they lived during the past two millennia than to the traditions of the ancient kingdom of Judaea. If King David were to show up in an ultra-Orthodox synagogue in present-day Jerusalem, he would be utterly bewildered to find people dressed in East European clothes, speaking in a German dialect (Yiddish) and having endless arguments about the meaning of a Babylonian text (the Talmud). There were neither synagogues, volumes of Talmud, nor even Torah scrolls in ancient Judaea.
Building and maintaining an empire usually required the vicious slaughter of large populations and the brutal oppression of everyone who was left. The standard imperial toolkit included wars, enslavement, deportation and genocide. When the Romans invaded Scotland in AD 83, they were met by fierce resistance from local Caledonian tribes, and reacted by laying waste to the country. In reply to Roman peace offers, the chieftain Calgacus called the Romans ‘the ruffians of the world’, and said that ‘to plunder, slaughter and robbery they give the lying name of empire; they make a desert and call it peace’.2
This does not mean, however, that empires leave nothing of value in their wake. To colour all empires black and to disavow all imperial legacies is to reject most of human culture. Imperial elites used the profits of conquest to finance not only armies and forts but also philosophy, art, justice and charity. A significant proportion of humanity’s cultural achievements owe their existence to the exploitation of conquered populations. The profits and prosperity brought by Roman imperialism provided Cicero, Seneca and St Augustine with the leisure and wherewithal to think and write; the Taj Mahal could not have been built without the wealth accumulated by Mughal exploitation of their Indian subjects; and the Habsburg Empire’s profits from its rule over its Slavic, Hungarian and Romanian-speaking provinces paid Haydn’s salaries and Mozart’s commissions. No Caledonian writer preserved Calgacus’ speech for posterity. We know of it thanks to the Roman historian Tacitus. In fact, Tacitus probably made it up. Most scholars today agree that Tacitus not only fabricated the speech but invented the character of Calgacus, the Caledonian chieftain, to serve as a mouthpiece for what he and other upper-class Romans thought about their own country.
Even if we look beyond elite culture and high art, and focus instead on the world of common people, we find imperial legacies in the majority of modern cultures. Today most of us speak, think and dream in imperial languages that were forced upon our ancestors by the sword. Most East Asians speak and dream in the language of the Han Empire. No matter what their origins, nearly all the inhabitants of the two American continents, from Alaska’s Point Barrow to the Strait of Magellan, communicate in one of four imperial languages: Spanish, Portuguese, French or English. Present-day Egyptians speak Arabic, think of themselves as Arabs, and identify wholeheartedly with the Arab Empire that conquered Egypt in the seventh century and crushed with an iron fist the repeated revolts that broke out against its rule. About 10 million Zulus in South Africa hark back to the Zulu age of glory in the nineteenth century, even though most of them descend from tribes who fought against the Zulu Empire, and were incorporated into it only through bloody military campaigns.
The first empire about which we have definitive information was the Akkadian Empire of Sargon the Great (c.2250 BC). Sargon began his career as the king of Kish, a small city state in Mesopotamia. Within a few decades he managed to conquer not only all other Mesopotamian city states, but also large territories outside the Mesopotamian heartland. Sargon boasted that he had conquered the entire world. In reality, his dominion stretched from the Persian Gulf to the Mediterranean, and included most of today’s Iraq and Syria, along with a few slices of modern Iran and Turkey.
The Akkadian Empire did not last long after its founder’s death, but Sargon left behind an imperial mantle that seldom remained unclaimed. For the next 1,700 years, Assyrian, Babylonian and Hittite kings adopted Sargon as a role model, boasting that they, too, had conquered the entire world. Then, around 550 BC, Cyrus the Great of Persia came along with an even more impressive boast.
Map 4. The Akkadian Empire and the Persian Empire.
{Maps by Neil Gower}
The kings of Assyria always remained the kings of Assyria. Even when they claimed to rule the entire world, it was obvious that they were doing it for the greater glory of Assyria, and they were not apologetic about it. Cyrus, on the other hand, claimed not merely to rule the whole world, but to do so for the sake of all people. ‘We are conquering you for your own benefit,’ said the Persians. Cyrus wanted the peoples he subjected to love him and to count themselves lucky to be Persian vassals. The most famous example of Cyrus’ innovative efforts to gain the approbation of a nation living under the thumb of his empire was his command that the Jewish exiles in Babylonia be allowed to return to their Judaean homeland and rebuild their temple. He even offered them financial assistance. Cyrus did not see himself as a Persian king ruling over Jews – he was also the king of the Jews, and thus responsible for their welfare.
The presumption to rule the entire world for the benefit of all its inhabitants was startling. Evolution has made Homo sapiens, like other social mammals, a xenophobic creature. Sapiens instinctively divide humanity into two parts, ‘we’ and ‘they’. We are people like you and me, who share our language, religion and customs. We are all responsible for each other, but not responsible for them. We were always distinct from them, and owe them nothing. We don’t want to see any of them in our territory, and we don’t care an iota what happens in their territory. They are barely even human. In the language of the Dinka people of the Sudan, ‘Dinka’ simply means ‘people’. People who are not Dinka are not people. The Dinka’s bitter enemies are the Nuer. What does the word Nuer mean in Nuer language? It means ‘original people’. Thousands of miles from the Sudan deserts, in the frozen ice-lands of Alaska and north-eastern Siberia, live the Yupiks. What does Yupik mean in Yupik language? It means ‘real people’.3
In contrast with this ethnic exclusiveness, imperial ideology from Cyrus onward has tended to be inclusive and all-encompassing. Even though it has often emphasised racial and cultural differences between rulers and ruled, it has still recognised the basic unity of the entire world, the existence of a single set of principles governing all places and times, and the mutual responsibilities of all human beings. Humankind is seen as a large family: the privileges of the parents go hand in hand with responsibility for the welfare of the children.
This new imperial vision passed from Cyrus and the Persians to Alexander the Great, and from him to Hellenistic kings, Roman emperors, Muslim caliphs, Indian dynasts, and eventually even to Soviet premiers and American presidents. This benevolent imperial vision has justified the existence of empires, and negated not only attempts by subject peoples to rebel, but also attempts by independent peoples to resist imperial expansion.
Similar imperial visions were developed independently of the Persian model in other parts of the world, most notably in Central America, in the Andean region, and in China. According to traditional Chinese political theory, Heaven (Tian) is the source of all legitimate authority on earth. Heaven chooses the most worthy person or family and gives them the Mandate of Heaven. This person or family then rules over All Under Heaven (Tianxia) for the benefit of all its inhabitants. Thus, a legitimate authority is – by definition – universal. If a ruler lacks the Mandate of Heaven, then he lacks legitimacy to rule even a single city. If a ruler enjoys the mandate, he is obliged to spread justice and harmony to the entire world. The Mandate of Heaven could not be given to several candidates simultaneously, and consequently one could not legitimise the existence of more than one independent state.
The first emperor of the united Chinese empire, Qín Shĭ Huángdì, boasted that ‘throughout the six directions [of the universe] everything belongs to the emperor . . . wherever there is a human footprint, there is not one who did not become a subject [of the emperor] . . . his kindness reaches even oxen and horses. There is not one who did not benefit. Every man is safe under his own roof.’4 In Chinese political thinking as well as Chinese historical memory, imperial periods were henceforth seen as golden ages of order and justice. In contrast to the modern Western view that a just world is composed of separate nation states, in China periods of political fragmentation were seen as dark ages of chaos and injustice. This perception has had far-reaching implications for Chinese history. Every time an empire collapsed, the dominant political theory goaded the powers that be not to settle for paltry independent principalities, but to attempt reunification. Sooner or later these attempts always succeeded.
Empires have played a decisive part in amalgamating many small cultures into fewer big cultures. Ideas, people, goods and technology spread more easily within the borders of an empire than in a politically fragmented region. Often enough, it was the empires themselves which deliberately spread ideas, institutions, customs and norms. One reason was to make life easier for themselves. It is difficult to rule an empire in which every little district has its own set of laws, its own form of writing, its own language and its own money. Standardisation was a boon to emperors.
A second and equally important reason why empires actively spread a common culture was to gain legitimacy. At least since the days of Cyrus and Qín Shĭ Huángdì, empires have justified their actions – whether road-building or bloodshed – as necessary to spread a superior culture from which the conquered benefit even more than the conquerors.
The benefits were sometimes salient – law enforcement, urban planning, standardisation of weights and measures – and sometimes questionable – taxes, conscription, emperor worship. But most imperial elites earnestly believed that they were working for the general welfare of all the empire’s inhabitants. China’s ruling class treated their country’s neighbours and its foreign subjects as miserable barbarians to whom the empire must bring the benefits of culture. The Mandate of Heaven was bestowed upon the emperor not in order to exploit the world, but in order to educate humanity. The Romans, too, justified their dominion by arguing that they were endowing the barbarians with peace, justice and refinement. The wild Germans and painted Gauls had lived in squalor and ignorance until the Romans tamed them with law, cleaned them up in public bathhouses, and improved them with philosophy. The Mauryan Empire in the third century BC took as its mission the dissemination of Buddha’s teachings to an ignorant world. The Muslim caliphs received a divine mandate to spread the Prophet’s revelation, peacefully if possible but by the sword if necessary. The Spanish and Portuguese empires proclaimed that it was not riches they sought in the Indies and America, but converts to the true faith. The sun never set on the British mission to spread the twin gospels of liberalism and free trade. The Soviets felt duty-bound to facilitate the inexorable historical march from capitalism towards the utopian dictatorship of the proletariat. Many Americans nowadays maintain that their government has a moral imperative to bring Third World countries the benefits of democracy and human rights, even if these goods are delivered by cruise missiles and F-16s.
The cultural ideas spread by empire were seldom the exclusive creation of the ruling elite. Since the imperial vision tends to be universal and inclusive, it was relatively easy for imperial elites to adopt ideas, norms and traditions from wherever they found them, rather than to stick fanatically to a single hidebound tradition. While some emperors sought to purify their cultures and return to what they viewed as their roots, for the most part empires have begotten hybrid civilisations that absorbed much from their subject peoples. The imperial culture of Rome was Greek almost as much as Roman. The imperial Abbasid culture was part Persian, part Greek, part Arab. Imperial Mongol culture was a Chinese copycat. In the imperial United States, an American president of Kenyan blood can munch on Italian pizza while watching his favourite film, Lawrence of Arabia, a British epic about the Arab rebellion against the Turks.
Not that this cultural melting pot made the process of cultural assimilation any easier for the vanquished. The imperial civilisation may well have absorbed numerous contributions from various conquered peoples, but the hybrid result was still alien to the vast majority. The process of assimilation was often painful and traumatic. It is not easy to give up a familiar and loved local tradition, just as it is difficult and stressful to understand and adopt a new culture. Worse still, even when subject peoples were successful in adopting the imperial culture, it could take decades, if not centuries, until the imperial elite accepted them as part of ‘us’. The generations between conquest and acceptance were left out in the cold. They had already lost their beloved local culture, but they were not allowed to take an equal part in the imperial world. On the contrary, their adopted culture continued to view them as barbarians.
Imagine an Iberian of good stock living a century after the fall of Numantia. He speaks his native Celtic dialect with his parents, but has acquired impeccable Latin, with only a slight accent, because he needs it to conduct his business and deal with the authorities. He indulges his wife’s penchant for elaborately ornate baubles, but is a bit embarrassed that she, like other local women, retains this relic of Celtic taste – he’d rather have her adopt the clean simplicity of the jewellery worn by the Roman governor’s wife. He himself wears Roman tunics and, thanks to his success as a cattle merchant, due in no small part to his expertise in the intricacies of Roman commercial law, he has been able to build a Roman-style villa. Yet, even though he can recite Book III of Virgil’s Georgics by heart, the Romans still treat him as though he’s semi-barbarian. He realises with frustration that he’ll never get a government appointment, or one of the really good seats in the amphitheatre.
In the late nineteenth century, many educated Indians were taught the same lesson by their British masters. One famous anecdote tells of an ambitious Indian who mastered the intricacies of the English language, took lessons in Western-style dance, and even became accustomed to eating with a knife and fork. Equipped with his new manners, he travelled to England, studied law at University College London, and became a qualified barrister. Yet this young man of law, bedecked in suit and tie, was thrown off a train in the British colony of South Africa for insisting on travelling first class instead of settling for third class, where ‘coloured’ men like him were supposed to ride. His name was Mohandas Karamchand Gandhi.
In some cases the processes of acculturation and assimilation eventually broke down the barriers between the newcomers and the old elite. The conquered no longer saw the empire as an alien system of occupation, and the conquerors came to view their subjects as equal to themselves. Rulers and ruled alike came to see ‘them’ as ‘us’. All the subjects of Rome eventually, after centuries of imperial rule, were granted Roman citizenship. Non-Romans rose to occupy the top ranks in the officer corps of the Roman legions and were appointed to the Senate. In AD 48 the emperor Claudius admitted to the Senate several Gallic notables, who, he noted in a speech, through ‘customs, culture, and the ties of marriage have blended with ourselves’. Snobbish senators protested against introducing these former enemies into the heart of the Roman political system. Claudius reminded them of an inconvenient truth. Most of their own senatorial families descended from Italian tribes who once fought against Rome, and were later granted Roman citizenship. Indeed, the emperor reminded them, his own family was of Sabine ancestry.5
During the second century AD, Rome was ruled by a line of emperors born in Iberia, in whose veins probably flowed at least a few drops of local Iberian blood. The reigns of Trajan, Hadrian, Antoninus Pius and Marcus Aurelius are generally thought to constitute the empire’s golden age. After that, all the ethnic dams were let down. Emperor Septimius Severus (193–211) was the scion of a Punic family from Libya. Elagabalus (218–22) was a Syrian. Emperor Philip (244–9) was known colloquially as ‘Philip the Arab’. The empire’s new citizens adopted Roman imperial culture with such zest that, for centuries and even millennia after the empire itself collapsed, they continued to speak the empire’s language, to believe in the Christian God that the empire had adopted from one of its Levantine provinces, and to live by the empire’s laws.
A similar process occurred in the Arab Empire. When it was established in the mid-seventh century AD, it was based on a sharp division between the ruling Arab–Muslim elite and the subjugated Egyptians, Syrians, Iranians and Berbers, who were neither Arabs nor Muslims. Many of the empire’s subjects gradually adopted the Muslim faith, the Arabic language and a hybrid imperial culture. The old Arab elite looked upon these parvenus with deep hostility, fearing to lose its unique status and identity. The frustrated converts clamoured for an equal share within the empire and in the world of Islam. Eventually they got their way. Egyptians, Syrians and Mesopotamians were increasingly seen as ‘Arabs’. Arabs, in their turn – whether ‘authentic’ Arabs from Arabia or newly minted Arabs from Egypt and Syria – came to be increasingly dominated by non-Arab Muslims, in particular by Iranians, Turks and Berbers. The great success of the Arab imperial project was that the imperial culture it created was wholeheartedly adopted by numerous non-Arab people, who continued to uphold it, develop it and spread it – even after the original empire collapsed and the Arabs as an ethnic group lost their dominion.
In China the success of the imperial project was even more thorough. For more than 2,000 years, a welter of ethnic and cultural groups first termed barbarians were successfully integrated into imperial Chinese culture and became Han Chinese (so named after the Han Empire that ruled China from 206 BC to AD 220). The ultimate achievement of the Chinese Empire is that it is still alive and kicking, yet it is hard to see it as an empire except in outlying areas such as Tibet and Xinjiang. More than 90 percent of the population of China are seen by themselves and by others as Han.
We can understand the decolonisation process of the last few decades in a similar way. During the modern era Europeans conquered much of the globe under the guise of spreading a superior Western culture. They were so successful that billions of people gradually adopted significant parts of that culture. Indians, Africans, Arabs, Chinese and Maoris learned French, English and Spanish. They began to believe in human rights and the principle of self-determination, and they adopted Western ideologies such as liberalism, capitalism, Communism, feminism and nationalism.
During the twentieth century, local groups that had adopted Western values claimed equality with their European conquerors in the name of these very values. Many anti-colonial struggles were waged under the banners of self-determination, socialism and human rights, all of which are Western legacies. Just as Egyptians, Iranians and Turks adopted and adapted the imperial culture that they inherited from the original Arab conquerors, so today’s Indians, Africans and Chinese have accepted much of the imperial culture of their former Western overlords, while seeking to mould it in accordance with their needs and traditions.
It is tempting to divide history neatly into good guys and bad guys, with all empires among the bad guys. For the vast majority of empires were founded on blood, and maintained their power through oppression and war. Yet most of today’s cultures are based on imperial legacies. If empires are by definition bad, what does that say about us?
There are schools of thought and political movements that seek to purge human culture of imperialism, leaving behind what they claim is a pure, authentic civilisation, untainted by sin. These ideologies are at best naïve; at worst they serve as disingenuous window-dressing for crude nationalism and bigotry. Perhaps you could make a case that some of the myriad cultures that emerged at the dawn of recorded history were pure, untouched by sin and unadulterated by other societies. But no culture since that dawn can reasonably make that claim, certainly no culture that exists now on earth. All human cultures are at least in part the legacy of empires and imperial civilisations, and no academic or political surgery can cut out the imperial legacies without killing the patient.
Think, for example, about the love–hate relationship between the independent Indian republic of today and the British Raj. The British conquest and occupation of India cost the lives of millions of Indians, and was responsible for the continuous humiliation and exploitation of hundreds of millions more. Yet many Indians adopted, with the zest of converts, Western ideas such as self-determination and human rights, and were dismayed when the British refused to live up to their own declared values by granting native Indians either equal rights as British subjects or independence.
Nevertheless, the modern Indian state is a child of the British Empire. The British killed, injured and persecuted the inhabitants of the subcontinent, but they also united a bewildering mosaic of warring kingdoms, principalities and tribes, creating a shared national consciousness and a country that functioned more or less as a single political unit. They laid the foundations of the Indian judicial system, created its administrative structure, and built the railroad network that was critical for economic integration. Independent India adopted Western democracy, in its British incarnation, as its form of government. English is still the subcontinent’s lingua franca, a neutral tongue that native speakers of Hindi, Tamil and Malayalam can use to communicate. Indians are passionate cricket players and chai (tea) drinkers, and both game and beverage are British legacies. Commercial tea farming did not exist in India until the mid-nineteenth century, when it was introduced by the British East India Company. It was the snobbish British sahibs who spread the custom of tea drinking throughout the subcontinent.
28. The Chhatrapati Shivaji train station in Mumbai. It began its life as Victoria Station, Bombay. The British built it in the Neo-Gothic style that was popular in late nineteenth-century Britain. A Hindu nationalist government changed the names of both city and station, but showed no appetite for razing such a magnificent building, even if it was built by foreign oppressors.
{© Stuart Black/Robert Harding World Imagery/Getty Images.}
How many Indians today would want to call a vote to divest themselves of democracy, English, the railway network, the legal system, cricket and tea on the grounds that they are imperial legacies? And if they did, wouldn’t the very act of calling a vote to decide the issue demonstrate their debt to their former overlords?
29. The Taj Mahal. An example of ‘authentic’ Indian culture, or the alien creation of Muslim imperialism?
{© The Art Archive/Gianni Dagli Orti (ref: AA423796).}
Even if we were to completely disavow the legacy of a brutal empire in the hope of reconstructing and safeguarding the ‘authentic’ cultures that preceded it, in all probability what we will be defending is nothing but the legacy of an older and no less brutal empire. Those who resent the mutilation of Indian culture by the British Raj inadvertently sanctify the legacies of the Mughal Empire and the conquering sultanate of Delhi. And whoever attempts to rescue ‘authentic Indian culture’ from the alien influences of these Muslim empires sanctifies the legacies of the Gupta Empire, the Kushan Empire and the Maurya Empire. If an extreme Hindu nationalist were to destroy all the buildings left by the British conquerors, such as Mumbai’s main train station, what about the structures left by India’s Muslim conquerors, such as the Taj Mahal?
Nobody really knows how to solve this thorny question of cultural inheritance. Whatever path we take, the first step is to acknowledge the complexity of the dilemma and to accept that simplistically dividing the past into good guys and bad guys leads nowhere. Unless, of course, we are willing to admit that we usually follow the lead of the bad guys.
Since around 200 BC, most humans have lived in empires. It seems likely that in the future, too, most humans will live in one. But this time the empire will be truly global. The imperial vision of dominion over the entire world could be imminent.
As the twenty-first century unfolds, nationalism is fast losing ground. More and more people believe that all of humankind is the legitimate source of political authority, rather than the members of a particular nationality, and that safeguarding human rights and protecting the interests of the entire human species should be the guiding light of politics. If so, having close to 200 independent states is a hindrance rather than a help. Since Swedes, Indonesians and Nigerians deserve the same human rights, wouldn’t it be simpler for a single global government to safeguard them?
The appearance of essentially global problems, such as melting ice caps, nibbles away at whatever legitimacy remains to the independent nation states. No sovereign state will be able to overcome global warming on its own. The Chinese Mandate of Heaven was given by Heaven to solve the problems of humankind. The modern Mandate of Heaven will be given by humankind to solve the problems of heaven, such as the hole in the ozone layer and the accumulation of greenhouse gases. The colour of the global empire may well be green.
As of 2014, the world is still politically fragmented, but states are fast losing their independence. Not one of them is really able to execute independent economic policies, to declare and wage wars as it pleases, or even to run its own internal affairs as it sees fit. States are increasingly open to the machinations of global markets, to the interference of global companies and NGOs, and to the supervision of global public opinion and the international judicial system. States are obliged to conform to global standards of financial behaviour, environmental policy and justice. Immensely powerful currents of capital, labour and information turn and shape the world, with a growing disregard for the borders and opinions of states.
The global empire being forged before our eyes is not governed by any particular state or ethnic group. Much like the Late Roman Empire, it is ruled by a multi-ethnic elite, and is held together by a common culture and common interests. Throughout the world, more and more entrepreneurs, engineers, experts, scholars, lawyers and managers are called to join the empire. They must ponder whether to answer the imperial call or to remain loyal to their state and their people. More and more choose the empire.
IN THE MEDIEVAL MARKET IN SAMARKAND, a city built on a Central Asian oasis, Syrian merchants ran their hands over fine Chinese silks, fierce tribesmen from the steppes displayed the latest batch of straw-haired slaves from the far west, and shopkeepers pocketed shiny gold coins imprinted with exotic scripts and the profiles of unfamiliar kings. Here, at one of that era’s major crossroads between east and west, north and south, the unification of humankind was an everyday fact. The same process could be observed at work when Kublai Khan’s army mustered to invade Japan in 1281. Mongol cavalrymen in skins and furs rubbed shoulders with Chinese foot soldiers in bamboo hats, drunken Korean auxiliaries picked fights with tattooed sailors from the South China Sea, engineers from Central Asia listened with dropping jaws to the tall tales of European adventurers, and all obeyed the command of a single emperor.
Meanwhile, around the holy Ka’aba in Mecca, human unification was proceeding by other means. Had you been a pilgrim to Mecca, circling Islam’s holiest shrine in the year 1300, you might have found yourself in the company of a party from Mesopotamia, their robes floating in the wind, their eyes blazing with ecstasy, and their mouths repeating one after the other the ninety-nine names of God. Just ahead you might have seen a weather-beaten Turkish patriarch from the Asian steppes, hobbling on a stick and stroking his beard thoughtfully. To one side, gold jewellery shining against jet-black skin, might have been a group of Muslims from the African kingdom of Mali. The aroma of clove, turmeric, cardamom and sea salt would have signalled the presence of brothers from India, or perhaps from the mysterious spice islands further east.
Today religion is often considered a source of discrimination, disagreement and disunion. Yet, in fact, religion has been the third great unifier of humankind, alongside money and empires. Since all social orders and hierarchies are imagined, they are all fragile, and the larger the society, the more fragile it is. The crucial historical role of religion has been to give superhuman legitimacy to these fragile structures. Religions assert that our laws are not the result of human caprice, but are ordained by an absolute and supreme authority. This helps place at least some fundamental laws beyond challenge, thereby ensuring social stability.
Religion can thus be defined as a system of human norms and values that is founded on a belief in a superhuman order. This involves two distinct criteria:
1. Religions hold that there is a superhuman order, which is not the product of human whims or agreements. Professional football is not a religion, because despite its many laws, rites and often bizarre rituals, everyone knows that human beings invented football themselves, and FIFA may at any moment enlarge the size of the goal or cancel the offside rule.
2. Based on this superhuman order, religion establishes norms and values that it considers binding. Many Westerners today believe in ghosts, fairies and reincarnation, but these beliefs are not a source of moral and behavioural standards. As such, they do not constitute a religion.
Despite their ability to legitimise widespread social and political orders, not all religions have actuated this potential. In order to unite under its aegis a large expanse of territory inhabited by disparate groups of human beings, a religion must possess two further qualities. First, it must espouse a universal superhuman order that is true always and everywhere. Second, it must insist on spreading this belief to everyone. In other words, it must be universal and missionary.
The best-known religions of history, such as Islam and Buddhism, are universal and missionary. Consequently people tend to believe that all religions are like them. In fact, the majority of ancient religions were local and exclusive. Their followers believed in local deities and spirits, and had no interest in converting the entire human race. As far as we know, universal and missionary religions began to appear only in the first millennium BC. Their emergence was one of the most important revolutions in history, and made a vital contribution to the unification of humankind, much like the emergence of universal empires and universal money.
When animism was the dominant belief system, human norms and values had to take into consideration the outlook and interests of a multitude of other beings, such as animals, plants, fairies and ghosts. For example, a forager band in the Ganges Valley may have established a rule forbidding people to cut down a particularly large fig tree, lest the fig-tree spirit become angry and take revenge. Another forager band living in the Indus Valley may have forbidden people from hunting white-tailed foxes, because a white-tailed fox once revealed to a wise old woman where the band might find precious obsidian.
Such religions tended to be very local in outlook, and to emphasise the unique features of specific locations, climates and phenomena. Most foragers spent their entire lives within an area of no more than a thousand square miles. In order to survive, the inhabitants of a particular valley needed to understand the superhuman order that regulated their valley, and to adjust their behaviour accordingly. It was pointless to try to convince the inhabitants of some distant valley to follow the same rules. The people of the Indus did not bother to send missionaries to the Ganges to convince locals not to hunt white-tailed foxes.
The Agricultural Revolution seems to have been accompanied by a religious revolution. Hunter-gatherers picked and pursued wild plants and animals, which could be seen as equal in status to Homo sapiens. The fact that man hunted sheep did not make sheep inferior to man, just as the fact that tigers hunted man did not make man inferior to tigers. Beings communicated with one another directly and negotiated the rules governing their shared habitat. In contrast, farmers owned and manipulated plants and animals, and could hardly degrade themselves by negotiating with their possessions. Hence the first religious effect of the Agricultural Revolution was to turn plants and animals from equal members of a spiritual round table into property.
This, however, created a big problem. Farmers may have desired absolute control of their sheep, but they knew perfectly well that their control was limited. They could lock the sheep in pens, castrate rams and selectively breed ewes, yet they could not ensure that the ewes conceived and gave birth to healthy lambs, nor could they prevent the eruption of deadly epidemics. How then to safeguard the fecundity of the flocks?
A leading theory about the origin of the gods argues that gods gained importance because they offered a solution to this problem. Gods such as the fertility goddess, the sky god and the god of medicine took centre stage when plants and animals lost their ability to speak, and the gods’ main role was to mediate between humans and the mute plants and animals. Much of ancient mythology is in fact a legal contract in which humans promise everlasting devotion to the gods in exchange for mastery over plants and animals – the first chapters of the book of Genesis are a prime example. For thousands of years after the Agricultural Revolution, religious liturgy consisted mainly of humans sacrificing lambs, wine and cakes to divine powers, who in exchange promised abundant harvests and fecund flocks.
The Agricultural Revolution initially had a far smaller impact on the status of other members of the animist system, such as rocks, springs, ghosts and demons. However, these too gradually lost status in favour of the new gods. As long as people lived their entire lives within limited territories of a few hundred square miles, most of their needs could be met by local spirits. But once kingdoms and trade networks expanded, people needed to contact entities whose power and authority encompassed a whole kingdom or an entire trade basin.
The attempt to answer these needs led to the appearance of polytheistic religions (from the Greek: poly = many, theos = god). These religions understood the world to be controlled by a group of powerful gods, such as the fertility goddess, the rain god and the war god. Humans could appeal to these gods and the gods might, if they received devotions and sacrifices, deign to bring rain, victory and health.
Animism did not entirely disappear at the advent of polytheism. Demons, fairies, ghosts, holy rocks, holy springs and holy trees remained an integral part of almost all polytheist religions. These spirits were far less important than the great gods, but for the mundane needs of many ordinary people, they were good enough. While the king in his capital city sacrificed dozens of fat rams to the great war god, praying for victory over the barbarians, the peasant in his hut lit a candle to the fig-tree fairy, praying that she help cure his sick son.
Yet the greatest impact of the rise of great gods was not on sheep or demons, but upon the status of Homo sapiens. Animists thought that humans were just one of many creatures inhabiting the world. Polytheists, on the other hand, increasingly saw the world as a reflection of the relationship between gods and humans. Our prayers, our sacrifices, our sins and our good deeds determined the fate of the entire ecosystem. A terrible flood might wipe out billions of ants, grasshoppers, turtles, antelopes, giraffes and elephants, just because a few stupid Sapiens made the gods angry. Polytheism thereby exalted not only the status of the gods, but also that of humankind. Less fortunate members of the old animist system lost their stature and became either extras or silent decor in the great drama of man’s relationship with the gods.
Two thousand years of monotheistic brainwashing have caused most Westerners to see polytheism as ignorant and childish idolatry. This is an unjust stereotype. In order to understand the inner logic of polytheism, it is necessary to grasp the central idea buttressing the belief in many gods.
Polytheism does not necessarily dispute the existence of a single power or law governing the entire universe. In fact, most polytheist and even animist religions recognised such a supreme power that stands behind all the different gods, demons and holy rocks. In classical Greek polytheism, Zeus, Hera, Apollo and their colleagues were subject to an omnipotent and all-encompassing power – Fate (Moira, Ananke). Nordic gods, too, were in thrall to fate, which doomed them to perish in the cataclysm of Ragnarök (the Twilight of the Gods). In the polytheistic religion of the Yoruba of West Africa, all gods were born of the supreme god Olodumare, and remained subject to him. In Hindu polytheism, a single principle, Atman, controls the myriad gods and spirits, humankind, and the biological and physical world. Atman is the eternal essence or soul of the entire universe, as well as of every individual and every phenomenon.
The fundamental insight of polytheism, which distinguishes it from monotheism, is that the supreme power governing the world is devoid of interests and biases, and therefore it is unconcerned with the mundane desires, cares and worries of humans. It’s pointless to ask this power for victory in war, for health or for rain, because from its all-encompassing vantage point, it makes no difference whether a particular kingdom wins or loses, whether a particular city prospers or withers, whether a particular person recuperates or dies. The Greeks did not waste any sacrifices on Fate, and Hindus built no temples to Atman.
The only reason to approach the supreme power of the universe would be to renounce all desires and embrace the bad along with the good – to embrace even defeat, poverty, sickness and death. Thus some Hindus, known as Sadhus or Sannyasis, devote their lives to uniting with Atman, thereby achieving enlightenment. They strive to see the world from the viewpoint of this fundamental principle, to realise that from its eternal perspective all mundane desires and fears are meaningless and ephemeral phenomena.
Most Hindus, however, are not Sadhus. They are sunk deep in the morass of mundane concerns, where Atman is not much help. For assistance in such matters, Hindus approach the gods with their partial powers. Precisely because their powers are partial rather than all-encompassing, gods such as Ganesha, Lakshmi and Saraswati have interests and biases. Humans can therefore make deals with these partial powers and rely on their help in order to win wars and recuperate from illness. There are necessarily many of these smaller powers, since once you start dividing up the all-encompassing power of a supreme principle, you’ll inevitably end up with more than one deity. Hence the plurality of gods.
The insight of polytheism is conducive to far-reaching religious tolerance. Since polytheists believe, on the one hand, in one supreme and completely disinterested power, and on the other hand in many partial and biased powers, there is no difficulty for the devotees of one god to accept the existence and efficacy of other gods. Polytheism is inherently open-minded, and rarely persecutes ‘heretics’ and ‘infidels’.
Even when polytheists conquered huge empires, they did not try to convert their subjects. The Egyptians, the Romans and the Aztecs did not send missionaries to foreign lands to spread the worship of Osiris, Jupiter or Huitzilopochtli (the chief Aztec god), and they certainly didn’t dispatch armies for that purpose. Subject peoples throughout the empire were expected to respect the empire’s gods and rituals, since these gods and rituals protected and legitimised the empire. Yet they were not required to give up their local gods and rituals. In the Aztec Empire, subject peoples were obliged to build temples for Huitzilopochtli, but these temples were built alongside those of local gods, rather than in their stead. In many cases the imperial elite itself adopted the gods and rituals of subject people. The Romans happily added the Asian goddess Cybele and the Egyptian goddess Isis to their pantheon.
The only god that the Romans long refused to tolerate was the monotheistic and evangelising god of the Christians. The Roman Empire did not require the Christians to give up their beliefs and rituals, but it did expect them to pay respect to the empire’s protector gods and to the divinity of the emperor. This was seen as a declaration of political loyalty. When the Christians vehemently refused to do so, and went on to reject all attempts at compromise, the Romans reacted by persecuting what they understood to be a politically subversive faction. And even this was done half-heartedly. In the 300 years from the crucifixion of Christ to the conversion of Emperor Constantine, polytheistic Roman emperors initiated no more than four general persecutions of Christians. Local administrators and governors incited some anti-Christian violence of their own. Still, if we combine all the victims of all these persecutions, it turns out that in these three centuries, the polytheistic Romans killed no more than a few thousand Christians.1 In contrast, over the course of the next 1,500 years, Christians slaughtered Christians by the millions to defend slightly different interpretations of the religion of love and compassion.
The religious wars between Catholics and Protestants that swept Europe in the sixteenth and seventeenth centuries are particularly notorious. All those involved accepted Christ’s divinity and His gospel of compassion and love. However, they disagreed about the nature of this love. Protestants believed that the divine love is so great that God was incarnated in flesh and allowed Himself to be tortured and crucified, thereby redeeming the original sin and opening the gates of heaven to all those who professed faith in Him. Catholics maintained that faith, while essential, was not enough. To enter heaven, believers had to participate in church rituals and do good deeds. Protestants refused to accept this, arguing that this quid pro quo belittles God’s greatness and love. Whoever thinks that entry to heaven depends upon his or her own good deeds magnifies his own importance, and implies that Christ’s suffering on the cross and God’s love for humankind are not enough.
These theological disputes turned so violent that during the sixteenth and seventeenth centuries, Catholics and Protestants killed each other by the hundreds of thousands. On 23 August 1572, French Catholics who stressed the importance of good deeds attacked communities of French Protestants who highlighted God’s love for humankind. In this attack, the St Bartholomew’s Day Massacre, between 5,000 and 10,000 Protestants were slaughtered in less than twenty-four hours. When the pope in Rome heard the news from France, he was so overcome by joy that he organised festive prayers to celebrate the occasion and commissioned Giorgio Vasari to decorate one of the Vatican’s rooms with a fresco of the massacre (the room is currently off-limits to visitors).2 More Christians were killed by fellow Christians in those twenty-four hours than by the polytheistic Roman Empire throughout its entire existence.
With time some followers of polytheist gods became so fond of their particular patron that they drifted away from the basic polytheist insight. They began to believe that their god was the only god, and that He was in fact the supreme power of the universe. Yet at the same time they continued to view Him as possessing interests and biases, and believed that they could strike deals with Him. Thus were born monotheist religions, whose followers beseech the supreme power of the universe to help them recover from illness, win the lottery and gain victory in war.
The first monotheist religion known to us appeared in Egypt, c.1350 BC, when Pharaoh Akhenaten declared that one of the minor deities of the Egyptian pantheon, the god Aten, was, in fact, the supreme power ruling the universe. Akhenaten institutionalised the worship of Aten as the state religion and tried to check the worship of all other gods. His religious revolution, however, was unsuccessful. After his death, the worship of Aten was abandoned in favour of the old pantheon.
Polytheism continued to give birth here and there to other monotheist religions, but they remained marginal, not least because they failed to digest their own universal message. Judaism, for example, argued that the supreme power of the universe has interests and biases, yet His chief interest is in the tiny Jewish nation and in the obscure land of Israel. Judaism had little to offer other nations, and throughout most of its existence it has not been a missionary religion. This stage can be called the stage of ‘local monotheism’.
The big breakthrough came with Christianity. This faith began as an esoteric Jewish sect that sought to convince Jews that Jesus of Nazareth was their long-awaited messiah. However, one of the sect’s first leaders, Paul of Tarsus, reasoned that if the supreme power of the universe has interests and biases, and if He had bothered to incarnate Himself in the flesh and to die on the cross for the salvation of humankind, then this is something everyone should hear about, not just Jews. It was thus necessary to spread the good word – the gospel – about Jesus throughout the world.
Paul’s arguments fell on fertile ground. Christians began organising widespread missionary activities aimed at all humans. In one of history’s strangest twists, this esoteric Jewish sect took over the mighty Roman Empire.
Christian success served as a model for another monotheist religion that appeared in the Arabian peninsula in the seventh century – Islam. Like Christianity, Islam, too, began as a small sect in a remote corner of the world, but in an even stranger and swifter historical surprise it managed to break out of the deserts of Arabia and conquer an immense empire stretching from the Atlantic Ocean to India. Henceforth, the monotheist idea played a central role in world history.
Monotheists have tended to be far more fanatical and missionary than polytheists. A religion that recognises the legitimacy of other faiths implies either that its god is not the supreme power of the universe, or that it received from God just part of the universal truth. Since monotheists have usually believed that they are in possession of the entire message of the one and only God, they have been compelled to discredit all other religions. Over the last two millennia, monotheists repeatedly tried to strengthen their hand by violently exterminating all competition.
It worked. At the beginning of the first century AD, there were hardly any monotheists in the world. Around AD 500, one of the world’s largest empires – the Roman Empire – was a Christian polity, and missionaries were busy spreading Christianity to other parts of Europe, Asia and Africa. By the end of the first millennium AD, most people in Europe, West Asia and North Africa were monotheists, and empires from the Atlantic Ocean to the Himalayas claimed to be ordained by the single great God. By the early sixteenth century, monotheism dominated most of Afro-Asia, with the exception of East Asia and the southern parts of Africa, and it began extending long tentacles towards South Africa, America and Oceania. Today most people outside East Asia adhere to one monotheist religion or another, and the global political order is built on monotheistic foundations.
Yet just as animism continued to survive within polytheism, so polytheism continued to survive within monotheism. In theory, once a person believes that the supreme power of the universe has interests and biases, what’s the point in worshipping partial powers? Who would want to approach a lowly bureaucrat when the president’s office is open to you? Indeed, monotheist theology tends to deny the existence of all gods except the supreme God, and to pour hellfire and brimstone over anyone who dares worship them.
Map 5. The Spread of Christianity and Islam.
{Maps by Neil Gower}
Yet there has always been a chasm between theological theories and historical realities. Most people have found it difficult to digest the monotheist idea fully. They have continued to divide the world into ‘we’ and ‘they’, and to see the supreme power of the universe as too distant and alien for their mundane needs. The monotheist religions expelled the gods through the front door with a lot of fanfare, only to take them back in through the side window. Christianity, for example, developed its own pantheon of saints, whose cults differed little from those of the polytheistic gods.
Just as the god Jupiter defended Rome and Huitzilopochtli protected the Aztec Empire, so every Christian kingdom had its own patron saint who helped it overcome difficulties and win wars. England was protected by St George, Scotland by St Andrew, Hungary by St Stephen, and France had St Martin. Cities and towns, professions, and even diseases – each had their own saint. The city of Milan had St Ambrose, while St Mark watched over Venice. St Florian protected chimney cleaners, whereas St Matthew lent a hand to tax collectors in distress. If you suffered from headaches you had to pray to St Agathius, but if from toothaches, then St Apollonia was a much better audience.
The Christian saints did not merely resemble the old polytheistic gods. Often they were these very same gods in disguise. For example, the chief goddess of Celtic Ireland prior to the coming of Christianity was Brigid. When Ireland was Christianised, Brigid too was baptised. She became St Brigit, who to this day is the most revered saint in Catholic Ireland.
Polytheism gave birth not merely to monotheist religions, but also to dualistic ones. Dualistic religions espouse the existence of two opposing powers: good and evil. Unlike monotheism, dualism believes that evil is an independent power, neither created by the good God, nor subordinate to it. Dualism explains that the entire universe is a battleground between these two forces, and that everything that happens in the world is part of the struggle.
Dualism is a very attractive world view because it has a short and simple answer to the famous Problem of Evil, one of the fundamental concerns of human thought. ‘Why is there evil in the world? Why is there suffering? Why do bad things happen to good people?’ Monotheists have to practise intellectual gymnastics to explain how an all-knowing, all-powerful and perfectly good God allows so much suffering in the world. One well-known explanation is that this is God’s way of allowing for human free will. Were there no evil, humans could not choose between good and evil, and hence there would be no free will. This, however, is a non-intuitive answer that immediately raises a host of new questions. Freedom of will allows humans to choose evil. Many indeed choose evil and, according to the standard monotheist account, this choice must bring divine punishment in its wake. If God knew in advance that a particular person would use her free will to choose evil, and that as a result she would be punished for this by eternal tortures in hell, why did God create her? Theologians have written countless books to answer such questions. Some find the answers convincing. Some don’t. What’s undeniable is that monotheists have a hard time dealing with the Problem of Evil.
For dualists, it’s easy to explain evil. Bad things happen even to good people because the world is not governed single-handedly by a good God. There is an independent evil power loose in the world. The evil power does bad things.
Dualism has its own drawbacks. While solving the Problem of Evil, it is unnerved by the Problem of Order. If the world was created by a single God, it’s clear why it is such an orderly place, where everything obeys the same laws. But if Good and Evil battle for control of the world, who enforces the laws governing this cosmic war? Two rival states can fight one another because both obey the same laws of physics. A missile launched from Pakistan can hit targets in India because gravity works the same way in both countries. When Good and Evil fight, what common laws do they obey, and who decreed these laws?
So, monotheism explains order, but is mystified by evil. Dualism explains evil, but is puzzled by order. There is one logical way of solving the riddle: to argue that there is a single omnipotent God who created the entire universe – and He’s evil. But nobody in history has had the stomach for such a belief.
Dualistic religions flourished for more than a thousand years. Sometime between 1500 BC and 1000 BC a prophet named Zoroaster (Zarathustra) was active somewhere in Central Asia. His creed passed from generation to generation until it became the most important of dualistic religions – Zoroastrianism. Zoroastrians saw the world as a cosmic battle between the good god Ahura Mazda and the evil god Angra Mainyu. Humans had to help the good god in this battle. Zoroastrianism was an important religion during the Achaemenid Persian Empire (550–330 BC) and later became the official religion of the Sassanid Persian Empire (AD 224–651). It exerted a major influence on almost all subsequent Middle Eastern and Central Asian religions, and it inspired a number of other dualist religions, such as Gnosticism and Manichaeanism.
During the third and fourth centuries AD, the Manichaean creed spread from China to North Africa, and for a moment it appeared that it would beat Christianity to achieve dominance in the Roman Empire. Yet the Manichaeans lost the soul of Rome to the Christians, the Zoroastrian Sassanid Empire was overrun by the monotheistic Muslims, and the dualist wave subsided. Today only a handful of dualist communities survive in India and the Middle East.
Nevertheless, the rising tide of monotheism did not really wipe out dualism. Jewish, Christian and Muslim monotheism absorbed numerous dualist beliefs and practices, and some of the most basic ideas of what we call ‘monotheism’ are, in fact, dualist in origin and spirit. Countless Christians, Muslims and Jews believe in a powerful evil force – like the one Christians call the Devil or Satan – who can act independently, fight against the good God, and wreak havoc without God’s permission.
How can a monotheist adhere to such a dualistic belief (which, by the way, is nowhere to be found in the Old Testament)? Logically, it is impossible. Either you believe in a single omnipotent God or you believe in two opposing powers, neither of which is omnipotent. Still, humans have a wonderful capacity to believe in contradictions. So it should not come as a surprise that millions of pious Christians, Muslims and Jews manage to believe at one and the same time in an omnipotent God and an independent Devil. Countless Christians, Muslims and Jews have gone so far as to imagine that the good God even needs our help in its struggle against the Devil, which inspired among other things the call for jihads and crusades.
Another key dualistic concept, particularly in Gnosticism and Manichaeanism, was the sharp distinction between body and soul, between matter and spirit. Gnostics and Manichaeans argued that the good god created the spirit and the soul, whereas matter and bodies are the creation of the evil god. Man, according to this view, serves as a battleground between the good soul and the evil body. From a monotheistic perspective, this is nonsense – why distinguish so sharply between body and soul, or matter and spirit? And why argue that body and matter are evil? After all, everything was created by the same good God. But monotheists could not help but be captivated by dualist dichotomies, precisely because they helped them address the problem of evil. So such oppositions eventually became cornerstones of Christian and Muslim thought. Belief in heaven (the realm of the good god) and hell (the realm of the evil god) was also dualist in origin. There is no trace of this belief in the Old Testament, which also never claims that the souls of people continue to live after the death of the body.
In fact, monotheism, as it has played out in history, is a kaleidoscope of monotheist, dualist, polytheist and animist legacies, jumbling together under a single divine umbrella. The average Christian believes in the monotheist God, but also in the dualist Devil, in polytheist saints, and in animist ghosts. Scholars of religion have a name for this simultaneous avowal of different and even contradictory ideas and the combination of rituals and practices taken from different sources. It’s called syncretism. Syncretism might, in fact, be the single great world religion.
All the religions we have discussed so far share one important characteristic: they all focus on a belief in gods and other supernatural entities. This seems obvious to Westerners, who are familiar mainly with monotheistic and polytheist creeds. In fact, however, the religious history of the world does not boil down to the history of gods. During the first millennium BC, religions of an altogether new kind began to spread through Afro-Asia. The newcomers, such as Jainism and Buddhism in India, Daoism and Confucianism in China, and Stoicism, Cynicism and Epicureanism in the Mediterranean basin, were characterised by their disregard of gods.
These creeds maintained that the superhuman order governing the world is the product of natural laws rather than of divine wills and whims. Some of these natural-law religions continued to espouse the existence of gods, but their gods were subject to the laws of nature no less than humans, animals and plants were. Gods had their niche in the ecosystem, just as elephants and porcupines had theirs, but could no more change the laws of nature than elephants can. A prime example is Buddhism, the most important of the ancient natural law religions, which remains one of the major faiths.
The central figure of Buddhism is not a god but a human being, Siddhartha Gautama. According to Buddhist tradition, Gautama was heir to a small Himalayan kingdom, sometime around 500 BC. The young prince was deeply affected by the suffering evident all around him. He saw that men and women, children and old people, all suffer not just from occasional calamities such as war and plague, but also from anxiety, frustration and discontent, all of which seem to be an inseparable part of the human condition. People pursue wealth and power, acquire knowledge and possessions, beget sons and daughters, and build houses and palaces. Yet no matter what they achieve, they are never content. Those who live in poverty dream of riches. Those who have a million want two million. Those who have two million want 10 million. Even the rich and famous are rarely satisfied. They too are haunted by ceaseless cares and worries, until sickness, old age and death put a bitter end to them. Everything that one has accumulated vanishes like smoke. Life is a pointless rat race. But how to escape it?
At the age of twenty-nine Gautama slipped away from his palace in the middle of the night, leaving behind his family and possessions. He travelled as a homeless vagabond throughout northern India, searching for a way out of suffering. He visited ashrams and sat at the feet of gurus but nothing liberated him entirely – some dissatisfaction always remained. He did not despair. He resolved to investigate suffering on his own until he found a method for complete liberation. He spent six years meditating on the essence, causes and cures for human anguish. In the end he came to the realisation that suffering is not caused by ill fortune, by social injustice, or by divine whims. Rather, suffering is caused by the behaviour patterns of one’s own mind.
Gautama’s insight was that no matter what the mind experiences, it usually reacts with craving, and craving always involves dissatisfaction. When the mind experiences something distasteful it craves to be rid of the irritation. When the mind experiences something pleasant, it craves that the pleasure will remain and will intensify. Therefore, the mind is always dissatisfied and restless. This is very clear when we experience unpleasant things, such as pain. As long as the pain continues, we are dissatisfied and do all we can to avoid it. Yet even when we experience pleasant things we are never content. We either fear that the pleasure might disappear, or we hope that it will intensify. People dream for years about finding love but are rarely satisfied when they find it. Some become anxious that their partner will leave; others feel that they have settled cheaply, and could have found someone better. And we all know people who manage to do both.
Map 6. The Spread of Buddhism.
{Maps by Neil Gower}
Great gods can send us rain, social institutions can provide justice and good health care, and lucky coincidences can turn us into millionaires, but none of them can change our basic mental patterns. Hence even the greatest kings are doomed to live in angst, constantly fleeing grief and anguish, forever chasing after greater pleasures.
Gautama found that there was a way to exit this vicious circle. If, when the mind experiences something pleasant or unpleasant, it simply understands things as they are, then there is no suffering. If you experience sadness without craving that the sadness go away, you continue to feel sadness but you do not suffer from it. There can actually be richness in the sadness. If you experience joy without craving that the joy linger and intensify, you continue to feel joy without losing your peace of mind.
But how do you get the mind to accept things as they are, without craving? To accept sadness as sadness, joy as joy, pain as pain? Gautama developed a set of meditation techniques that train the mind to experience reality as it is, without craving. These practices train the mind to focus all its attention on the question, ‘What am I experiencing now?’ rather than on ‘What would I rather be experiencing?’ It is difficult to achieve this state of mind, but not impossible.
Gautama grounded these meditation techniques in a set of ethical rules meant to make it easier for people to focus on actual experience and to avoid falling into cravings and fantasies. He instructed his followers to avoid killing, promiscuous sex and theft, since such acts necessarily stoke the fire of craving (for power, for sensual pleasure, or for wealth). When the flames are completely extinguished, craving is replaced by a state of perfect contentment and serenity, known as nirvana (the literal meaning of which is ‘extinguishing the fire’). Those who have attained nirvana are fully liberated from all suffering. They experience reality with the utmost clarity, free of fantasies and delusions. While they will most likely still encounter unpleasantness and pain, such experiences cause them no misery. A person who does not crave cannot suffer.
According to Buddhist tradition, Gautama himself attained nirvana and was fully liberated from suffering. Henceforth he was known as ‘Buddha’, which means ‘The Enlightened One’. Buddha spent the rest of his life explaining his discoveries to others so that everyone could be freed from suffering. He encapsulated his teachings in a single law: suffering arises from craving; the only way to be fully liberated from suffering is to be fully liberated from craving; and the only way to be liberated from craving is to train the mind to experience reality as it is.
This law, known as dharma or dhamma, is seen by Buddhists as a universal law of nature. That ‘suffering arises from craving’ is always and everywhere true, just as in modern physics E always equals mc². Buddhists are people who believe in this law and make it the fulcrum of all their activities. Belief in gods, on the other hand, is of minor importance to them. The first principle of monotheist religions is ‘God exists. What does He want from me?’ The first principle of Buddhism is ‘Suffering exists. How do I escape it?’
Buddhism does not deny the existence of gods – they are described as powerful beings who can bring rains and victories – but they have no influence on the law that suffering arises from craving. If the mind of a person is free of all craving, no god can make him miserable. Conversely, once craving arises in a person’s mind, all the gods in the universe cannot save him from suffering.
Yet much like the monotheist religions, premodern natural-law religions such as Buddhism never really rid themselves of the worship of gods. Buddhism told people that they should aim for the ultimate goal of complete liberation from suffering, rather than for stops along the way such as economic prosperity and political power. However, 99 per cent of Buddhists did not attain nirvana, and even if they hoped to do so in some future lifetime, they devoted most of their present lives to the pursuit of mundane achievements. So they continued to worship various gods, such as the Hindu gods in India, the Bon gods in Tibet, and the Shinto gods in Japan.
Moreover, as time went by several Buddhist sects developed pantheons of Buddhas and bodhisattvas. These are human and non-human beings with the capacity to achieve full liberation from suffering but who forego this liberation out of compassion, in order to help the countless beings still trapped in the cycle of misery. Instead of worshipping gods, many Buddhists began worshipping these enlightened beings, asking them for help not only in attaining nirvana, but also in dealing with mundane problems. Thus we find many Buddhas and bodhisattvas throughout East Asia who spend their time bringing rain, stopping plagues, and even winning bloody wars – in exchange for prayers, colourful flowers, fragrant incense and gifts of rice and candy.
The last 300 years are often depicted as an age of growing secularism, in which religions have increasingly lost their importance. If we are talking about theist religions, this is largely correct. But if we take into consideration natural-law religions, then modernity turns out to be an age of intense religious fervour, unparalleled missionary efforts, and the bloodiest wars of religion in history. The modern age has witnessed the rise of a number of new natural-law religions, such as liberalism, Communism, capitalism, nationalism and Nazism. These creeds do not like to be called religions, and refer to themselves as ideologies. But this is just a semantic exercise. If a religion is a system of human norms and values that is founded on belief in a superhuman order, then Soviet Communism was no less a religion than Islam.
Islam is of course different from Communism, because Islam sees the superhuman order governing the world as the edict of an omnipotent creator god, whereas Soviet Communism did not believe in gods. But Buddhism too gives short shrift to gods, and yet we commonly classify it as a religion. Like Buddhists, Communists believed in a superhuman order of natural and immutable laws that should guide human actions. Whereas Buddhists believe that the law of nature was discovered by Siddhartha Gautama, Communists believed that the law of nature was discovered by Karl Marx, Friedrich Engels and Vladimir Ilyich Lenin. The similarity does not end there. Like other religions, Communism too has its holy scripts and prophetic books, such as Marx’s Das Kapital, which foretold that history would soon end with the inevitable victory of the proletariat. Communism had its holidays and festivals, such as the First of May and the anniversary of the October Revolution. It had theologians adept at Marxist dialectics, and every unit in the Soviet army had a chaplain, called a commissar, who monitored the piety of soldiers and officers. Communism had martyrs, holy wars and heresies, such as Trotskyism. Soviet Communism was a fanatical and missionary religion. A devout Communist could not be a Christian or a Buddhist, and was expected to spread the gospel of Marx and Lenin even at the price of his or her life.
Religion is a system of human norms and values that is founded on belief in a superhuman order. The theory of relativity is not a religion, because (at least so far) there are no human norms and values that are founded on it. Football is not a religion because nobody argues that its rules reflect superhuman edicts. Islam, Buddhism and Communism are all religions, because all are systems of human norms and values that are founded on belief in a superhuman order. (Note the difference between ‘superhuman’ and ‘supernatural’. The Buddhist law of nature and the Marxist laws of history are superhuman, since they were not legislated by humans. Yet they are not supernatural.)
Some readers may feel very uncomfortable with this line of reasoning. If it makes you feel better, you are free to go on calling Communism an ideology rather than a religion. It makes no difference. We can divide creeds into god-centred religions and godless ideologies that claim to be based on natural laws. But then, to be consistent, we would need to catalogue at least some Buddhist, Daoist and Stoic sects as ideologies rather than religions. Conversely, we should note that belief in gods persists within many modern ideologies, and that some of them, most notably liberalism, make little sense without this belief.
It would be impossible to survey here the history of all the new modern creeds, especially because there are no clear boundaries between them. They are no less syncretic than monotheism and popular Buddhism. Just as a Buddhist could worship Hindu deities, and just as a monotheist could believe in the existence of Satan, so the typical American nowadays is simultaneously a nationalist (she believes in the existence of an American nation with a special role to play in history), a free-market capitalist (she believes that open competition and the pursuit of self-interest are the best ways to create a prosperous society), and a liberal humanist (she believes that humans have been endowed by their creator with certain inalienable rights). Nationalism will be discussed in Chapter 18. Capitalism – the most successful of the modern religions – gets a whole chapter, Chapter 16, which expounds its principal beliefs and rituals. In the remaining pages of this chapter I will address the humanist religions.
Theist religions focus on the worship of gods. Humanist religions worship humanity, or more correctly, Homo sapiens. Humanism is a belief that Homo sapiens has a unique and sacred nature, which is fundamentally different from the nature of all other animals and of all other phenomena. Humanists believe that the unique nature of Homo sapiens is the most important thing in the world, and it determines the meaning of everything that happens in the universe. The supreme good is the good of Homo sapiens. The rest of the world and all other beings exist solely for the benefit of this species.
All humanists worship humanity, but they do not agree on its definition. Humanism has split into three rival sects that fight over the exact definition of ‘humanity’, just as rival Christian sects fought over the exact definition of God. Today, the most important humanist sect is liberal humanism, which believes that ‘humanity’ is a quality of individual humans, and that the liberty of individuals is therefore sacrosanct. According to liberals, the sacred nature of humanity resides within each and every individual Homo sapiens. The inner core of individual humans gives meaning to the world, and is the source for all ethical and political authority. If we encounter an ethical or political dilemma, we should look inside and listen to our inner voice – the voice of humanity. The chief commandments of liberal humanism are meant to protect the liberty of this inner voice against intrusion or harm. These commandments are collectively known as ‘human rights’.
This, for example, is why liberals object to torture and the death penalty. In early modern Europe, murderers were thought to violate and destabilise the cosmic order. To bring the cosmos back to balance, it was necessary to torture and publicly execute the criminal, so that everyone could see the order re-established. Attending gruesome executions was a favourite pastime for Londoners and Parisians in the era of Shakespeare and Molière. In today’s Europe, murder is seen as a violation of the sacred nature of humanity. In order to restore order, present-day Europeans do not torture and execute criminals. Instead, they punish a murderer in what they see as the most ‘humane’ way possible, thus safeguarding and even rebuilding his human sanctity. By honouring the human nature of the murderer, everyone is reminded of the sanctity of humanity, and order is restored. By defending the murderer, we right what the murderer has wronged.
Even though liberal humanism sanctifies humans, it does not deny the existence of God, and is, in fact, founded on monotheist beliefs. The liberal belief in the free and sacred nature of each individual is a direct legacy of the traditional Christian belief in free and eternal individual souls. Without recourse to eternal souls and a Creator God, it becomes embarrassingly difficult for liberals to explain what is so special about individual Sapiens.
Another important sect is socialist humanism. Socialists believe that ‘humanity’ is collective rather than individualistic. They hold as sacred not the inner voice of each individual, but the species Homo sapiens as a whole. Whereas liberal humanism seeks as much freedom as possible for individual humans, socialist humanism seeks equality between all humans. According to socialists, inequality is the worst blasphemy against the sanctity of humanity, because it privileges peripheral qualities of humans over their universal essence. For example, when the rich are privileged over the poor, it means that we value money more than the universal essence of all humans, which is the same for rich and poor alike.
Like liberal humanism, socialist humanism is built on monotheist foundations. The idea that all humans are equal is a revamped version of the monotheist conviction that all souls are equal before God. The only humanist sect that has actually broken loose from traditional monotheism is evolutionary humanism, whose most famous representatives are the Nazis. What distinguished the Nazis from other humanist sects was a different definition of ‘humanity’, one deeply influenced by the theory of evolution. In contrast to other humanists, the Nazis believed that humankind is not something universal and eternal, but rather a mutable species that can evolve or degenerate. Man can evolve into superman, or degenerate into a subhuman.
The main ambition of the Nazis was to protect humankind from degeneration and encourage its progressive evolution. This is why the Nazis said that the Aryan race, the most advanced form of humanity, had to be protected and fostered, while degenerate kinds of Homo sapiens like Jews, Roma, homosexuals and the mentally ill had to be quarantined and even exterminated. The Nazis explained that Homo sapiens itself appeared when one ‘superior’ population of ancient humans evolved, whereas ‘inferior’ populations such as the Neanderthals became extinct. These different populations were at first no more than different races, but developed independently along their own evolutionary paths. This might well happen again. According to the Nazis, Homo sapiens had already divided into several distinct races, each with its own unique qualities. One of these races, the Aryan race, had the finest qualities – rationalism, beauty, integrity, diligence. The Aryan race therefore had the potential to turn man into superman. Other races, such as Jews and blacks, were today’s Neanderthals, possessing inferior qualities. If allowed to breed, and in particular to intermarry with Aryans, they would adulterate all human populations and doom Homo sapiens to extinction.
Biologists have since debunked Nazi racial theory. In particular, genetic research conducted after 1945 has demonstrated that the differences between the various human lineages are far smaller than the Nazis postulated. But these conclusions are relatively new. Given the state of scientific knowledge in 1933, Nazi beliefs were hardly outside the pale. The existence of different human races, the superiority of the white race, and the need to protect and cultivate this superior race were widely held beliefs among most Western elites. Scholars in the most prestigious Western universities, using the orthodox scientific methods of the day, published studies that allegedly proved that members of the white race were more intelligent, more ethical and more skilled than Africans or Indians. Politicians in Washington, London and Canberra took it for granted that it was their job to prevent the adulteration and degeneration of the white race, by, for example, restricting immigration from China or even Italy to ‘Aryan’ countries such as the USA and Australia.
These positions did not change simply because new scientific research was published. Sociological and political developments were far more powerful engines of change. In this sense, Hitler dug not just his own grave but that of racism in general. When he launched World War Two, he compelled his enemies to make clear distinctions between ‘us’ and ‘them’. Afterwards, precisely because Nazi ideology was so racist, racism became discredited in the West. But the change took time. White supremacy remained a mainstream ideology in American politics at least until the 1960s. The White Australia policy which restricted immigration of non-white people to Australia remained in force until 1973. Aboriginal Australians did not receive equal political rights until the 1960s, and most were prevented from voting in elections because they were deemed unfit to function as citizens.
30. A Nazi propaganda poster showing on the right a ‘racially pure Aryan’ and on the left a ‘cross-breed’. Nazi admiration for the human body is evident, as is their fear that the lower races might pollute humanity and cause its degeneration.
{Library of Congress, Bildarchiv Preussischer Kulturbesitz, United States Holocaust Memorial Museum © courtesy of Roland Klemig.}
The Nazis did not loathe humanity. They fought liberal humanism, human rights and Communism precisely because they admired humanity (according to their notions of humanity) and believed in the great potential of the human species. But following the logic of Darwinian evolution, they argued that natural selection must be allowed to weed out unfit individuals and leave only the fittest to survive and reproduce. By succouring the weak, liberalism and Communism not only allowed unfit individuals to survive, they actually gave them the opportunity to reproduce, thereby undermining natural selection. In such a world, the fittest humans would inevitably drown in a sea of unfit degenerates. Humankind would become less and less fit with each passing generation – which could lead to its extinction.
31. A Nazi cartoon of 1933. Hitler is presented as a sculptor who creates the superman. A bespectacled liberal intellectual is appalled by the violence needed to create the superman. (Note also the erotic glorification of the human body.)
{Photo: Boaz Neumann. From Kladderadatsch 49 (1933), 7.}
A 1942 German biology textbook explains in the chapter ‘The Laws of Nature and Mankind’ that the supreme law of nature is that all beings are locked in a remorseless struggle for survival. After describing how plants struggle for territory, how beetles struggle to find mates and so forth, the textbook concludes that:
The battle for existence is hard and unforgiving, but is the only way to maintain life. This struggle eliminates everything that is unfit for life, and selects everything that is able to survive . . . These natural laws are incontrovertible; living creatures demonstrate them by their very survival. They are unforgiving. Those who resist them will be wiped out. Biology not only tells us about animals and plants, but also shows us the laws we must follow in our lives, and steels our wills to live and fight according to these laws. The meaning of life is struggle. Woe to him who sins against these laws.
Then follows a quotation from Mein Kampf: ‘The person who attempts to fight the iron logic of nature thereby fights the principles he must thank for his life as a human being. To fight against nature is to bring about one’s own destruction.’3
At the dawn of the third millennium, the future of evolutionary humanism is unclear. For sixty years after the end of the war against Hitler it was taboo to link humanism with evolution and to advocate using biological methods to ‘upgrade’ Homo sapiens. But today such projects are back in vogue. No one speaks about exterminating lower races or inferior people, but many contemplate using our increasing knowledge of human biology to create superhumans.
At the same time, a huge gulf is opening between the tenets of liberal humanism and the latest findings of the life sciences, a gulf we cannot ignore much longer. Our liberal political and judicial systems are founded on the belief that every individual has a sacred inner nature, indivisible and immutable, which gives meaning to the world, and which is the source of all ethical and political authority. This is a reincarnation of the traditional Christian belief in a free and eternal soul that resides within each individual. Yet over the last 200 years, the life sciences have thoroughly undermined this belief. Scientists studying the inner workings of the human organism have found no soul there. They increasingly argue that human behaviour is determined by hormones, genes and synapses, rather than by free will – the same forces that determine the behaviour of chimpanzees, wolves, and ants. Our judicial and political systems largely try to sweep such inconvenient discoveries under the carpet. But in all frankness, how long can we maintain the wall separating the department of biology from the departments of law and political science?
COMMERCE, EMPIRES AND UNIVERSAL religions eventually brought virtually every Sapiens on every continent into the global world we live in today. Not that this process of expansion and unification was linear or without interruptions. Looking at the bigger picture, though, the transition from many small cultures to a few large cultures and finally to a single global society was probably an inevitable result of the dynamics of human history.
But saying that a global society is inevitable is not the same as saying that the end result had to be the particular kind of global society we now have. We can certainly imagine other outcomes. Why is English so widespread today, and not Danish? Why are there about 2 billion Christians and 1.25 billion Muslims, but only 150,000 Zoroastrians and no Manichaeans? If we could go back in time to 10,000 years ago and set the process going again, time after time, would we always see the rise of monotheism and the decline of dualism?
We can’t do such an experiment, so we don’t really know. But an examination of two crucial characteristics of history can provide us with some clues.
Every point in history is a crossroads. A single travelled road leads from the past to the present, but myriad paths fork off into the future. Some of those paths are wider, smoother and better marked, and are thus more likely to be taken, but sometimes history – or the people who make history – takes unexpected turns.
At the beginning of the fourth century AD, the Roman Empire faced a wide horizon of religious possibilities. It could have stuck to its traditional and variegated polytheism. But its emperor, Constantine, looking back on a fractious century of civil war, seems to have thought that a single religion with a clear doctrine could help unify his ethnically diverse realm. He could have chosen any of a number of contemporary cults to be his national faith – Manichaeism, Mithraism, the cults of Isis or Cybele, Zoroastrianism, Judaism and even Buddhism were all available options. Why did he opt for Jesus? Was there something in Christian theology that attracted him personally, or perhaps an aspect of the faith that made him think it would be easier to use for his purposes? Did he have a religious experience, or did some of his advisers suggest that the Christians were quickly gaining adherents and that it would be best to jump on that wagon? Historians can speculate, but not provide any definitive answer. They can describe how Christianity took over the Roman Empire, but they cannot explain why this particular possibility was realised.
What is the difference between describing ‘how’ and explaining ‘why’? To describe ‘how’ means to reconstruct the series of specific events that led from one point to another. To explain ‘why’ means to find causal connections that account for the occurrence of this particular series of events to the exclusion of all others.
Some scholars do indeed provide deterministic explanations of events such as the rise of Christianity. They attempt to reduce human history to the workings of biological, ecological or economic forces. They argue that there was something about the geography, genetics or economy of the Roman Mediterranean that made the rise of a monotheist religion inevitable. Yet most historians tend to be sceptical of such deterministic theories. This is one of the distinguishing marks of history as an academic discipline – the better you know a particular historical period, the harder it becomes to explain why things happened one way and not another. Those who have only a superficial knowledge of a certain period tend to focus only on the possibility that was eventually realised. They offer a just-so story to explain with hindsight why that outcome was inevitable. Those more deeply informed about the period are much more cognisant of the roads not taken.
In fact, the people who knew the period best – those alive at the time – were the most clueless of all. For the average Roman in Constantine’s time, the future was a fog. It is an iron rule of history that what looks inevitable in hindsight was far from obvious at the time. Today is no different. Are we out of the global economic crisis, or is the worst still to come? Will China continue growing until it becomes the leading superpower? Will the United States lose its hegemony? Is the upsurge of monotheistic fundamentalism the wave of the future or a local whirlpool of little long-term significance? Are we heading towards ecological disaster or technological paradise? There are good arguments to be made for all of these outcomes, but no way of knowing for sure. In a few decades, people will look back and think that the answers to all of these questions were obvious.
It is particularly important to stress that possibilities which seem very unlikely to contemporaries often get realised. When Constantine assumed the throne in 306, Christianity was little more than an esoteric Eastern sect. If you were to suggest then that it was about to become the Roman state religion, you’d have been laughed out of the room just as you would be today if you were to suggest that by the year 2050 Hare Krishna would be the state religion of the USA. In October 1913, the Bolsheviks were a small radical Russian faction. No reasonable person would have predicted that within a mere four years they would take over the country. In AD 600, the notion that a band of desert-dwelling Arabs would soon conquer an expanse stretching from the Atlantic Ocean to India was even more preposterous. Indeed, had the Byzantine army been able to repel the initial onslaught, Islam would probably have remained an obscure cult of which only a handful of cognoscenti were aware. Scholars would then have a very easy job explaining why a faith based on a revelation to a middle-aged Meccan merchant could never have caught on.
Not that everything is possible. Geographical, biological and economic forces create constraints. Yet these constraints leave ample room for surprising developments, which do not seem bound by any deterministic laws.
This conclusion disappoints many people, who prefer history to be deterministic. Determinism is appealing because it implies that our world and our beliefs are a natural and inevitable product of history. It is natural and inevitable that we live in nation states, organise our economy along capitalist principles, and fervently believe in human rights. To acknowledge that history is not deterministic is to acknowledge that it is just a coincidence that most people today believe in nationalism, capitalism and human rights.
History cannot be explained deterministically and it cannot be predicted because it is chaotic. So many forces are at work and their interactions are so complex that extremely small variations in the strength of the forces and the way they interact produce huge differences in outcomes. Not only that, but history is what is called a ‘level two’ chaotic system. Chaotic systems come in two shapes. Level one chaos is chaos that does not react to predictions about it. The weather, for example, is a level one chaotic system. Though it is influenced by myriad factors, we can build computer models that take more and more of them into consideration, and produce better and better weather forecasts.
Level two chaos is chaos that reacts to predictions about it, and therefore can never be predicted accurately. Markets, for example, are a level two chaotic system. What will happen if we develop a computer program that forecasts with 100 per cent accuracy the price of oil tomorrow? The price of oil will immediately react to the forecast, which would consequently fail to materialise. If the current price of oil is $90 a barrel, and the infallible computer program predicts that tomorrow it will be $100, traders will rush to buy oil so that they can profit from the predicted price rise. As a result, the price will shoot up to $100 a barrel today rather than tomorrow. Then what will happen tomorrow? Nobody knows.
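To make the difference between the two kinds of chaos concrete, here is a minimal sketch, with numbers and a reaction rule invented purely for illustration (nothing below comes from the text itself): a level one system ignores forecasts about it, while a level two system absorbs the forecast the instant it is published.

```python
# A toy contrast between 'level one' and 'level two' chaos. All numbers are
# invented for illustration; nothing here comes from the text itself.

def weather_tomorrow(forecast_mm: float) -> float:
    """Level one: the weather does not care what we predicted about it."""
    actual_mm = 12.0                 # whatever the atmosphere was going to do anyway
    return actual_mm                 # the forecast has no effect on the outcome

def oil_price_after_forecast(price_now: float, forecast: float) -> float:
    """Level two: the market reacts to the prediction the moment it is published.
    If a rise to `forecast` is predicted, traders buy at once and the rise
    happens today - so the forecast for tomorrow no longer comes true."""
    return forecast

print("Weather (level one):", weather_tomorrow(forecast_mm=12.0), "mm of rain")

price_today = 90.0        # $ per barrel
prediction = 100.0        # the 'infallible' program says: $100 tomorrow
price_today = oil_price_after_forecast(price_today, prediction)
print("Oil price today, right after the forecast is published:", price_today)
# Tomorrow's price is once again anyone's guess: the act of predicting it
# changed the very thing being predicted.
```

The weather has no analogue of the second function: its forecast never feeds back into the system, which is why better models keep yielding better weather predictions.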
Politics, too, is a second-order chaotic system. Many people criticise Sovietologists for failing to predict the 1989 revolutions and castigate Middle East experts for not anticipating the Arab Spring revolutions of 2011. This is unfair. Revolutions are, by definition, unpredictable. A predictable revolution never erupts.
Why not? Imagine that it’s 2010 and some genius political scientists in cahoots with a computer wizard have developed an infallible algorithm that, incorporated into an attractive interface, can be marketed as a revolution predictor. They offer their services to President Hosni Mubarak of Egypt and, in return for a generous down payment, tell Mubarak that according to their forecasts a revolution would certainly break out in Egypt during the course of the following year. How would Mubarak react? Most likely, he would immediately lower taxes, distribute billions of dollars in handouts to the citizenry – and also beef up his secret police force, just in case. The pre-emptive measures work. The year comes and goes and, surprise, there is no revolution. Mubarak demands his money back. ‘Your algorithm is worthless!’ he shouts at the scientists. ‘In the end I could have built another palace instead of giving all that money away!’ ‘But the reason the revolution didn’t happen is because we predicted it,’ the scientists say in their defence. ‘Prophets who predict things that don’t happen?’ Mubarak remarks as he motions his guards to grab them. ‘I could have picked up a dozen of those for next to nothing in the Cairo marketplace.’
So why study history? Unlike physics or economics, history is not a means for making accurate predictions. We study history not to know the future but to widen our horizons, to understand that our present situation is neither natural nor inevitable, and that we consequently have many more possibilities before us than we imagine. For example, studying how Europeans came to dominate Africans enables us to realise that there is nothing natural or inevitable about the racial hierarchy, and that the world might well be arranged differently.
We cannot explain the choices that history makes, but we can say something very important about them: history’s choices are not made for the benefit of humans. There is absolutely no proof that human well-being inevitably improves as history rolls along. There is no proof that cultures that are beneficial to humans must inexorably succeed and spread, while less beneficial cultures disappear. There is no proof that Christianity was a better choice than Manichaeism, or that the Arab Empire was more beneficial than that of the Sassanid Persians.
There is no proof that history is working for the benefit of humans because we lack an objective scale on which to measure such benefit. Different cultures define the good differently, and we have no objective yardstick by which to judge between them. The victors, of course, always believe that their definition is correct. But why should we believe the victors? Christians believe that the victory of Christianity over Manichaeism was beneficial to humankind, but if we do not accept the Christian world view then there is no reason to agree with them. Muslims believe that the fall of the Sassanid Empire into Muslim hands was beneficial to humankind. But these benefits are evident only if we accept the Muslim world view. It may well be that we’d all be better off if Christianity and Islam had been forgotten or defeated.
Ever more scholars see cultures as a kind of mental infection or parasite, with humans as its unwitting host. Organic parasites, such as viruses, live inside the body of their hosts. They multiply and spread from one host to the other, feeding off their hosts, weakening them, and sometimes even killing them. As long as the hosts live long enough to pass along the parasite, it cares little about the condition of its host. In just this fashion, cultural ideas live inside the minds of humans. They multiply and spread from one host to another, occasionally weakening the hosts and sometimes even killing them. A cultural idea – such as belief in Christian heaven above the clouds or Communist paradise here on earth – can compel a human to dedicate his or her life to spreading that idea, even at the price of death. The human dies, but the idea spreads. According to this approach, cultures are not conspiracies concocted by some people in order to take advantage of others (as Marxists tend to think). Rather, cultures are mental parasites that emerge accidentally, and thereafter take advantage of all people infected by them.
This approach is sometimes called memetics. It assumes that, just as organic evolution is based on the replication of organic information units called ‘genes’, so cultural evolution is based on the replication of cultural information units called ‘memes’.1 Successful cultures are those that excel in reproducing their memes, irrespective of the costs and benefits to their human hosts.
Most scholars in the humanities disdain memetics, seeing it as an amateurish attempt to explain cultural processes with crude biological analogies. But many of these same scholars adhere to memetics’ twin sister – postmodernism. Postmodernist thinkers speak about discourses rather than memes as the building blocks of culture. Yet they too see cultures as propagating themselves with little regard for the benefit of humankind. For example, postmodernist thinkers describe nationalism as a deadly plague that spread throughout the world in the nineteenth and twentieth centuries, causing wars, oppression, hate and genocide. The moment people in one country were infected with it, those in neighbouring countries were also likely to catch the virus. The nationalist virus presented itself as being beneficial for humans, yet it has been beneficial mainly to itself.
Similar arguments are common in the social sciences, under the aegis of game theory. Game theory explains how in multi-player systems, views and behaviour patterns that harm all players nevertheless manage to take root and spread. Arms races are a famous example. Many arms races bankrupt all those who take part in them, without really changing the military balance of power. When Pakistan buys advanced aeroplanes, India responds in kind. When India develops nuclear bombs, Pakistan follows suit. When Pakistan enlarges its navy, India counters. At the end of the process, the balance of power may remain much as it was, but meanwhile billions of dollars that could have been invested in education or health are spent on weapons. Yet the arms race dynamic is hard to resist. ‘Arms racing’ is a pattern of behaviour that spreads itself like a virus from one country to another, harming everyone, but benefiting itself, under the evolutionary criteria of survival and reproduction. (Keep in mind that an arms race, like a gene, has no awareness – it does not consciously seek to survive and reproduce. Its spread is the unintended result of a powerful dynamic.)
No matter what you call it – game theory, postmodernism or memetics – the dynamics of history are not directed towards enhancing human well-being. There is no basis for thinking that the most successful cultures in history are necessarily the best ones for Homo sapiens. Like evolution, history disregards the happiness of individual organisms. And individual humans, for their part, are usually far too ignorant and weak to influence the course of history to their own advantage.
History proceeds from one junction to the next, choosing for some mysterious reason to follow first this path, then another. Around AD 1500, history made its most momentous choice, changing not only the fate of humankind, but arguably the fate of all life on earth. We call it the Scientific Revolution. It began in western Europe, a large peninsula on the western tip of Afro-Asia, which until then had played no important role in history. Why did the Scientific Revolution begin there of all places, and not in China or India? Why did it begin at the midpoint of the second millennium AD rather than two centuries before or three centuries later? We don’t know. Scholars have proposed dozens of theories, but none of them is particularly convincing.
History has a very wide horizon of possibilities, and many possibilities are never realised. It is easy to imagine history going on for generations upon generations while bypassing the Scientific Revolution, just as it is easy to imagine history without Christianity, without a Roman Empire, and without gold coins.
32. Alamogordo, 16 July 1945, 05:29:53. Eight seconds after the first atomic bomb was detonated. The nuclear physicist Robert Oppenheimer, upon seeing the explosion, quoted from the Bhagavadgita: ‘Now I am become Death, the destroyer of worlds.’
{© Visual/Corbis.}
WERE, SAY, A SPANISH PEASANT TO HAVE fallen asleep in AD 1000 and woken up 500 years later, to the din of Columbus’ sailors boarding the Niña, Pinta and Santa Maria, the world would have seemed to him quite familiar. Despite many changes in technology, manners and political boundaries, this medieval Rip Van Winkle would have felt at home. But had one of Columbus’ sailors fallen into a similar slumber and woken up to the ringtone of a twenty-first-century iPhone, he would have found himself in a world strange beyond comprehension. ‘Is this heaven?’ he might well have asked himself. ‘Or perhaps – hell?’
The last 500 years have witnessed a phenomenal and unprecedented growth in human power. In the year 1500, there were about 500 million Homo sapiens in the entire world. Today, there are 7 billion.1 The total value of goods and services produced by humankind in the year 1500 is estimated at $250 billion, in today’s dollars.2 Nowadays the value of a year of human production is close to $60 trillion.3 In 1500, humanity consumed about 13 trillion calories of energy per day. Today, we consume 1,500 trillion calories a day.4 (Take a second look at those figures – human population has increased fourteen-fold, production 240-fold, and energy consumption 115-fold.)
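The fold increases quoted in the parentheses follow directly from the figures above:

```latex
\[
\frac{7\times10^{9}}{500\times10^{6}} = 14 \;(\text{population}), \qquad
\frac{\$60\times10^{12}}{\$250\times10^{9}} = 240 \;(\text{production}), \qquad
\frac{1{,}500\ \text{trillion}}{13\ \text{trillion}} \approx 115 \;(\text{daily calories})
\]
```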
Suppose a single modern battleship got transported back to Columbus’ time. In a matter of seconds it could make driftwood out of the Niña, Pinta and Santa Maria and then sink the navies of every great world power of the time without sustaining a scratch. Five modern freighters could take on board all the cargo borne by the whole world’s merchant fleets.5 A modern computer could easily store every word and number in all the codex books and scrolls in every single medieval library with room to spare. Any large bank today holds more money than all the world’s premodern kingdoms put together.6
In 1500, few cities had more than 100,000 inhabitants. Most buildings were constructed of mud, wood and straw; a three-storey building was a skyscraper. The streets were rutted dirt tracks, dusty in summer and muddy in winter, plied by pedestrians, horses, goats, chickens and a few carts. The most common urban noises were human and animal voices, along with the occasional hammer and saw. At sunset, the cityscape went black, with only an occasional candle or torch flickering in the gloom. If an inhabitant of such a city could see modern Tokyo, New York or Mumbai, what would she think?
Prior to the sixteenth century, no human had circumnavigated the earth. This changed in 1522, when Magellan’s expedition returned to Spain after a journey of 44,000 miles. It took three years and cost the lives of almost all the crew members, Magellan included. In 1873, Jules Verne could imagine that Phileas Fogg, a wealthy British adventurer, might just be able to make it around the world in eighty days. Today anyone with a middle-class income can safely and easily circumnavigate the globe in just forty-eight hours.
In 1500, humans were confined to the earth’s surface. They could build towers and climb mountains, but the sky was reserved for birds, angels and deities. On 20 July 1969 humans landed on the moon. This was not merely a historical achievement, but an evolutionary and even cosmic feat. During the previous 4 billion years of evolution, no organism managed even to leave the earth’s atmosphere, and certainly none left a foot or tentacle print on the moon.
For most of history, humans knew nothing about 99.99 per cent of the organisms on the planet – namely, the microorganisms. This was not because they were of no concern to us. Each of us bears billions of one-celled creatures within us, and not just as free-riders. They are our best friends, and deadliest enemies. Some of them digest our food and clean our guts, while others cause illnesses and epidemics. Yet it was only in 1674 that a human eye first saw a microorganism, when Anton van Leeuwenhoek took a peek through his home-made microscope and was startled to see an entire world of tiny creatures milling about in a drop of water. During the subsequent 300 years, humans have made the acquaintance of a huge number of microscopic species. We’ve managed to defeat most of the deadliest contagious diseases they cause, and have harnessed microorganisms in the service of medicine and industry. Today we engineer bacteria to produce medications, manufacture biofuel and kill parasites.
But the single most remarkable and defining moment of the past 500 years came at 05:29:45 on 16 July 1945. At that precise second, American scientists detonated the first atomic bomb at Alamogordo, New Mexico. From that point onward, humankind had the capability not only to change the course of history, but to end it.
The historical process that led to Alamogordo and to the moon is known as the Scientific Revolution. During this revolution humankind has obtained enormous new powers by investing resources in scientific research. It is a revolution because, until about AD 1500, humans the world over doubted their ability to obtain new medical, military and economic powers. While governments and wealthy patrons allocated funds to education and scholarship, the aim was, in general, to preserve existing capabilities rather than acquire new ones. The typical premodern ruler gave money to priests, philosophers and poets in the hope that they would legitimise his rule and maintain the social order. He did not expect them to discover new medications, invent new weapons or stimulate economic growth.
During the last five centuries, humans increasingly came to believe that they could increase their capabilities by investing in scientific research. This wasn’t just blind faith – it was repeatedly proven empirically. The more proofs there were, the more resources wealthy people and governments were willing to put into science. We would never have been able to walk on the moon, engineer microorganisms and split the atom without such investments. The US government, for example, has in recent decades allocated billions of dollars to the study of nuclear physics. The knowledge produced by this research has made possible the construction of nuclear power stations, which provide cheap electricity for American industries, which pay taxes to the US government, which uses some of these taxes to finance further research in nuclear physics.
The Scientific Revolution’s feedback loop. Science needs more than just research to make progress. It depends on the mutual reinforcement of science, politics and economics. Political and economic institutions provide the resources without which scientific research is almost impossible. In return, scientific research provides new powers that are used, among other things, to obtain new resources, some of which are reinvested in research.
Why did modern humans develop a growing belief in their ability to obtain new powers through research? What forged the bond between science, politics and economics? This chapter looks at the unique nature of modern science in order to provide part of the answer. The next two chapters examine the formation of the alliance between science, the European empires and the economics of capitalism.
Humans have sought to understand the universe at least since the Cognitive Revolution. Our ancestors put a great deal of time and effort into trying to discover the rules that govern the natural world. But modern science differs from all previous traditions of knowledge in three critical ways:
a. The willingness to admit ignorance. Modern science is based on the Latin injunction ignoramus – ‘we do not know’. It assumes that we don’t know everything. Even more critically, it accepts that the things that we think we know could be proven wrong as we gain more knowledge. No concept, idea or theory is sacred and beyond challenge.
b. The centrality of observation and mathematics. Having admitted ignorance, modern science aims to obtain new knowledge. It does so by gathering observations and then using mathematical tools to connect these observations into comprehensive theories.
c. The acquisition of new powers. Modern science is not content with creating theories. It uses these theories in order to acquire new powers, and in particular to develop new technologies.
The Scientific Revolution has not been a revolution of knowledge. It has been above all a revolution of ignorance. The great discovery that launched the Scientific Revolution was the discovery that humans do not know the answers to their most important questions.
Premodern traditions of knowledge such as Islam, Christianity, Buddhism and Confucianism asserted that everything that is important to know about the world was already known. The great gods, or the one almighty God, or the wise people of the past possessed all-encompassing wisdom, which they revealed to us in scriptures and oral traditions. Ordinary mortals gained knowledge by delving into these ancient texts and traditions and understanding them properly. It was inconceivable that the Bible, the Qur’an or the Vedas were missing out on a crucial secret of the universe – a secret that might yet be discovered by flesh-and-blood creatures.
Ancient traditions of knowledge admitted only two kinds of ignorance. First, an individual might be ignorant of something important. To obtain the necessary knowledge, all he needed to do was ask somebody wiser. There was no need to discover something that nobody yet knew. For example, if a peasant in some thirteenth-century Yorkshire village wanted to know how the human race originated, he assumed that Christian tradition held the definitive answer. All he had to do was ask the local priest.
Second, an entire tradition might be ignorant of unimportant things. By definition, whatever the great gods or the wise people of the past did not bother to tell us was unimportant. For example, if our Yorkshire peasant wanted to know how spiders weave their webs, it was pointless to ask the priest, because there was no answer to this question in any of the Christian Scriptures. That did not mean, however, that Christianity was deficient. Rather, it meant that understanding how spiders weave their webs was unimportant. After all, God knew perfectly well how spiders do it. If this were a vital piece of information, necessary for human prosperity and salvation, God would have included a comprehensive explanation in the Bible.
Christianity did not forbid people to study spiders. But spider scholars – if there were any in medieval Europe – had to accept their peripheral role in society and the irrelevance of their findings to the eternal truths of Christianity. No matter what a scholar might discover about spiders or butterflies or Galapagos finches, that knowledge was little more than trivia, with no bearing on the fundamental truths of society, politics and economics.
In fact, things were never quite that simple. In every age, even the most pious and conservative, there were people who argued that there were important things of which their entire tradition was ignorant. Yet such people were usually marginalised or persecuted – or else they founded a new tradition and began arguing that they knew everything there is to know. For example, the prophet Muhammad began his religious career by condemning his fellow Arabs for living in ignorance of the divine truth. Yet Muhammad himself very quickly began to argue that he knew the full truth, and his followers began calling him ‘The Seal of the Prophets’. Henceforth, there was no need of revelations beyond those given to Muhammad.
Modern-day science is a unique tradition of knowledge, inasmuch as it openly admits collective ignorance regarding the most important questions. Darwin never argued that he was ‘The Seal of the Biologists’, and that he had solved the riddle of life once and for all. After centuries of extensive scientific research, biologists admit that they still don’t have any good explanation for how brains produce consciousness. Physicists admit that they don’t know what caused the Big Bang, or how to reconcile quantum mechanics with the theory of general relativity.
In other cases, competing scientific theories are vociferously debated on the basis of constantly emerging new evidence. A prime example is the debates about how best to run the economy. Though individual economists may claim that their method is the best, orthodoxy changes with every financial crisis and stock-exchange bubble, and it is generally accepted that the final word on economics is yet to be said.
In still other cases, particular theories are supported so consistently by the available evidence that all alternatives have long since fallen by the wayside. Such theories are accepted as true – yet everyone agrees that were new evidence to emerge that contradicts the theory, it would have to be revised or discarded. Good examples of these are the plate tectonics theory and the theory of evolution.
The willingness to admit ignorance has made modern science more dynamic, supple and inquisitive than any previous tradition of knowledge. This has hugely expanded our capacity to understand how the world works and our ability to invent new technologies. But it presents us with a serious problem that most of our ancestors did not have to cope with. Our current assumption that we do not know everything, and that even the knowledge we possess is tentative, extends to the shared myths that enable millions of strangers to cooperate effectively. If the evidence shows that many of those myths are doubtful, how can we hold society together? How can our communities, countries and international system function?
All modern attempts to stabilise the sociopolitical order have had no choice but to rely on either of two unscientific methods:
a. Take a scientific theory, and in opposition to common scientific practices, declare that it is a final and absolute truth. This was the method used by Nazis (who claimed that their racial policies were the corollaries of biological facts) and Communists (who claimed that Marx and Lenin had divined absolute economic truths that could never be refuted).
b. Leave science out of it and live in accordance with a non-scientific absolute truth. This has been the strategy of liberal humanism, which is built on a dogmatic belief in the unique worth and rights of human beings – a doctrine which has embarrassingly little in common with the scientific study of Homo sapiens.
But that shouldn’t surprise us. Even science itself has to rely on religious and ideological beliefs to justify and finance its research.
Modern culture has nevertheless been willing to embrace ignorance to a much greater degree than has any previous culture. One of the things that has made it possible for modern social orders to hold together is the spread of an almost religious belief in technology and in the methods of scientific research, which have replaced to some extent the belief in absolute truths.
Modern science has no dogma. Yet it has a common core of research methods, which are all based on collecting empirical observations – those we can observe with at least one of our senses – and putting them together with the help of mathematical tools.
People throughout history collected empirical observations, but the importance of these observations was usually limited. Why waste precious resources obtaining new observations when we already have all the answers we need? But as modern people came to admit that they did not know the answers to some very important questions, they found it necessary to look for completely new knowledge. Consequently, the dominant modern research method takes for granted the insufficiency of old knowledge. Instead of studying old traditions, emphasis is now placed on new observations and experiments. When present observation collides with past tradition, we give precedence to the observation. Of course, physicists analysing the spectra of distant galaxies, archaeologists analysing the finds from a Bronze Age city, and political scientists studying the emergence of capitalism do not disregard tradition. They start by studying what the wise people of the past have said and written. But from their first year in college, aspiring physicists, archaeologists and political scientists are taught that it is their mission to go beyond what Einstein, Heinrich Schliemann and Max Weber ever knew.
Mere observations, however, are not knowledge. In order to understand the universe, we need to connect observations into comprehensive theories. Earlier traditions usually formulated their theories in terms of stories. Modern science uses mathematics.
There are very few equations, graphs and calculations in the Bible, the Qur’an, the Vedas or the Confucian classics. When traditional mythologies and scriptures laid down general laws, these were presented in narrative rather than mathematical form. Thus a fundamental principle of Manichaean religion asserted that the world is a battleground between good and evil. An evil force created matter, while a good force created spirit. Humans are caught between these two forces, and should choose good over evil. Yet the prophet Mani made no attempt to offer a mathematical formula that could be used to predict human choices by quantifying the respective strength of these two forces. He never calculated that ‘the force acting on a man is equal to the acceleration of his spirit divided by the mass of his body’.
This is exactly what scientists seek to accomplish. In 1687, Isaac Newton published The Mathematical Principles of Natural Philosophy, arguably the most important book in modern history. Newton presented a general theory of movement and change. The greatness of Newton’s theory was its ability to explain and predict the movements of all bodies in the universe, from falling apples to shooting stars, using three very simple mathematical laws:
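The laws themselves are not reproduced in this text (in the printed book they appear as an illustration). Assuming the three laws referred to are the classical laws of motion, they can be written as:

```latex
\[
\begin{aligned}
&\text{1. Inertia: a body keeps its velocity unless a net force acts on it:} && \sum \vec{F} = 0 \;\Rightarrow\; \vec{v} = \text{constant} \\
&\text{2. Force equals mass times acceleration:} && \vec{F} = m\,\vec{a} \\
&\text{3. Action and reaction are equal and opposite:} && \vec{F}_{A\to B} = -\,\vec{F}_{B\to A}
\end{aligned}
\]
```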
Henceforth, anyone who wished to understand and predict the movement of a cannonball or a planet simply had to make measurements of the object’s mass, direction and acceleration, and the forces acting on it. By inserting these numbers into Newton’s equations, the future position of the object could be predicted. It worked like magic. Only around the end of the nineteenth century did scientists come across a few observations that did not fit well with Newton’s laws, and these led to the next revolutions in physics – the theory of relativity and quantum mechanics.
Newton showed that the book of nature is written in the language of mathematics. Some chapters of nature boil down to a clear-cut equation; but scholars who attempted to reduce biology, economics and psychology to neat Newtonian equations have discovered that these fields have a level of complexity that makes such an aspiration futile. This did not mean, however, that they gave up on mathematics. A new branch of mathematics was developed over the last 200 years to deal with the more complex aspects of reality: statistics.
In 1744, two Presbyterian clergymen in Scotland, Alexander Webster and Robert Wallace, decided to set up a life-insurance fund that would provide pensions for the widows and orphans of dead clergymen. They proposed that each of their church’s ministers would pay a small portion of his income into the fund, which would invest the money. If a minister died, his widow would receive dividends on the fund’s profits. This would allow her to live comfortably for the rest of her life. But to determine how much the ministers had to pay in so that the fund would have enough money to live up to its obligations, Webster and Wallace had to be able to predict how many ministers would die each year, how many widows and orphans they would leave behind, and by how many years the widows would outlive their husbands.
Take note of what the two churchmen did not do. They did not pray to God to reveal the answer. Nor did they search for an answer in the Holy Scriptures or among the works of ancient theologians. Nor did they enter into an abstract philosophical disputation. Being Scots, they were practical types. So they contacted a professor of mathematics from the University of Edinburgh, Colin Maclaurin. The three of them collected data on the ages at which people died and used these to calculate how many ministers were likely to pass away in any given year.
Their work was founded on several recent breakthroughs in the fields of statistics and probability. One of these was Jacob Bernoulli’s Law of Large Numbers. Bernoulli had codified the principle that while it might be difficult to predict with certainty a single event, such as the death of a particular person, it was possible to predict with great accuracy the average outcome of many similar events. That is, while Maclaurin could not use maths to predict whether Webster and Wallace would die next year, he could, given enough data, tell Webster and Wallace how many Presbyterian ministers in Scotland would almost certainly die next year. Fortunately, they had ready-made data that they could use. Actuarial tables published fifty years previously by Edmond Halley proved particularly useful. Halley had analysed records of 1,238 births and 1,174 deaths that he obtained from the city of Breslau, Germany. Halley’s tables made it possible to see that, for example, a twenty-year-old person has a 1:100 chance of dying in a given year, but a fifty-year-old person has a 1:39 chance.
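A rough sketch of the reasoning, with made-up numbers: the two per-age death probabilities below are the ones quoted from Halley’s tables, but the size and age mix of the pool are invented, so the totals are only illustrative (the fund’s actual figure of twenty-seven deaths a year reflects the real age distribution of the ministers). What the sketch shows is the Law of Large Numbers at work: no single death can be predicted, yet the yearly total barely moves.

```python
# Illustrative only: the Law of Large Numbers stabilises the aggregate even
# though each individual outcome is unpredictable. The probabilities 1:100
# and 1:39 come from the text; the pool's size and age mix are invented.
import random

random.seed(0)

def deaths_in_one_year(pool):
    """Each entry of `pool` is one person's annual probability of dying."""
    return sum(1 for p in pool if random.random() < p)

pool = [1 / 100] * 465 + [1 / 39] * 465   # 930 ministers, half young, half older

totals = [deaths_in_one_year(pool) for _ in range(1000)]
average = sum(totals) / len(totals)

print(f"Expected deaths per year: {465 / 100 + 465 / 39:.1f}")
print(f"Simulated average over 1,000 years: {average:.1f}")
print(f"Range across simulated years: {min(totals)}-{max(totals)}")
# Individual fates remain a lottery, but the yearly total clusters tightly
# around its expected value - the regularity Webster and Wallace relied on.
```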
Processing these numbers, Webster and Wallace concluded that, on average, there would be 930 living Scottish Presbyterian ministers at any given moment, and an average of twenty-seven ministers would die each year, eighteen of whom would be survived by widows. Five of those who did not leave widows would leave orphaned children, and two of those survived by widows would also be outlived by children from previous marriages who had not yet reached the age of sixteen. They further computed how much time was likely to go by before the widows’ death or remarriage (in both these eventualities, payment of the pension would cease). These figures enabled Webster and Wallace to determine how much money the ministers who joined their fund had to pay in order to provide for their loved ones. By contributing £2 12s. 2d. a year, a minister could guarantee that his widowed wife would receive at least £10 a year – a hefty sum in those days. If he thought that was not enough he could choose to pay in more, up to a level of £6 11s. 3d. a year – which would guarantee his widow the even more handsome sum of £25 a year.
According to their calculations, by the year 1765 the Fund for a Provision for the Widows and Children of the Ministers of the Church of Scotland would have capital totalling £58,348. Their calculations proved amazingly accurate. When that year arrived, the fund’s capital stood at £58,347 – just £1 less than the prediction! This was even better than the prophecies of Habakkuk, Jeremiah or St John. Today, Webster and Wallace’s fund, known simply as Scottish Widows, is one of the largest pension and insurance companies in the world. With assets worth £100 billion, it insures not only Scottish widows, but anyone willing to buy its policies.7
Probability calculations such as those used by the two Scottish ministers became the foundation not merely of actuarial science, which is central to the pension and insurance business, but also of the science of demography (founded by another clergyman, the Anglican Robert Malthus). Demography in its turn was the cornerstone on which Charles Darwin (who almost became an Anglican pastor) built his theory of evolution. While there are no equations that predict what kind of organism will evolve under a specific set of conditions, geneticists use probability calculations to compute the likelihood that a particular mutation will spread in a given population. Similar probabilistic models have become central to economics, sociology, psychology, political science and the other social and natural sciences. Even physics eventually supplemented Newton’s classical equations with the probability clouds of quantum mechanics.
We need merely look at the history of education to realise how far this process has taken us. Throughout most of history, mathematics was an esoteric field that even educated people rarely studied seriously. In medieval Europe, logic, grammar and rhetoric formed the educational core, while the teaching of mathematics seldom went beyond simple arithmetic and geometry. Nobody studied statistics. The undisputed monarch of all sciences was theology.
Today few students study rhetoric; logic is restricted to philosophy departments, and theology to seminaries. But more and more students are motivated – or forced – to study mathematics. There is an irresistible drift towards the exact sciences – defined as ‘exact’ by their use of mathematical tools. Even fields of study that were traditionally part of the humanities, such as the study of human language (linguistics) and the human psyche (psychology), rely increasingly on mathematics and seek to present themselves as exact sciences. Statistics courses are now part of the basic requirements not just in physics and biology, but also in psychology, sociology, economics and political science.
In the course catalogue of the psychology department at my own university, the first required course in the curriculum is ‘Introduction to Statistics and Methodology in Psychological Research’. Second-year psychology students must take ‘Statistical Methods in Psychological Research’. Confucius, Buddha, Jesus and Muhammad would have been bewildered if you told them that in order to understand the human mind and cure its illnesses you must first study statistics.
Most people have a hard time digesting modern science because its mathematical language is difficult for our minds to grasp, and its findings often contradict common sense. Out of the 7 billion people in the world, how many really understand quantum mechanics, cell biology or macroeconomics? Science nevertheless enjoys immense prestige because of the new powers it gives us. Presidents and generals may not understand nuclear physics, but they have a good grasp of what nuclear bombs can do.
In 1620 Francis Bacon published a scientific manifesto titled The New Instrument. In it he argued that ‘knowledge is power’. The real test of ‘knowledge’ is not whether it is true, but whether it empowers us. Scientists usually assume that no theory is 100 per cent correct. Consequently, truth is a poor test for knowledge. The real test is utility. A theory that enables us to do new things constitutes knowledge.
Over the centuries, science has offered us many new tools. Some are mental tools, such as those used to predict death rates and economic growth. Even more important are technological tools. The connection forged between science and technology is so strong that today people tend to confuse the two. We often think that it is impossible to develop new technologies without scientific research, and that there is little point in research if it does not result in new technologies.
In fact, the relationship between science and technology is a very recent phenomenon. Prior to 1500, science and technology were totally separate fields. When Bacon connected the two in the early seventeenth century, it was a revolutionary idea. During the seventeenth and eighteenth centuries this relationship tightened, but the knot was tied only in the nineteenth century. Even in 1800, most rulers who wanted a strong army, and most business magnates who wanted a successful business, did not bother to finance research in physics, biology or economics.
I don’t mean to claim that there is no exception to this rule. A good historian can find precedent for everything. But an even better historian knows when these precedents are but curiosities that cloud the big picture. Generally speaking, most premodern rulers and business people did not finance research about the nature of the universe in order to develop new technologies, and most thinkers did not try to translate their findings into technological gadgets. Rulers financed educational institutions whose mandate was to spread traditional knowledge for the purpose of buttressing the existing order.
Here and there people did develop new technologies, but these were usually created by uneducated craftsmen using trial and error, not by scholars pursuing systematic scientific research. Cart manufacturers built the same carts from the same materials year in year out. They did not set aside a percentage of their annual profits in order to research and develop new cart models. Cart design occasionally improved, but it was usually thanks to the ingenuity of some local carpenter who never set foot in a university and did not even know how to read.
This was true of the public as well as the private sector. Whereas modern states call in their scientists to provide solutions in almost every area of national policy, from energy to health to waste disposal, ancient kingdoms seldom did so. The contrast between then and now is most pronounced in weaponry. When outgoing President Dwight Eisenhower warned in 1961 of the growing power of the military-industrial complex, he left out a part of the equation. He should have alerted his country to the military-industrial-scientific complex, because today’s wars are scientific productions. The world’s military forces initiate, fund and steer a large part of humanity’s scientific research and technological development.
When World War One bogged down into interminable trench warfare, both sides called in the scientists to break the deadlock and save the nation. The men in white answered the call, and out of the laboratories rolled a constant stream of new wonder-weapons: combat aircraft, poison gas, tanks, submarines and ever more efficient machine guns, artillery pieces, rifles and bombs.
33. German V-2 rocket ready to launch. It didn’t defeat the Allies, but it kept the Germans hoping for a technological miracle until the very last days of the war.
{© Ria Novosti/Science Photo Library.}
Science played an even larger role in World War Two. By late 1944 Germany was losing the war and defeat was imminent. A year earlier, the Germans’ allies, the Italians, had toppled Mussolini and surrendered to the Allies. But Germany kept fighting on, even though the British, American and Soviet armies were closing in. One reason German soldiers and civilians thought not all was lost was that they believed German scientists were about to turn the tide with so-called miracle weapons such as the V-2 rocket and jet-powered aircraft.
While the Germans were working on rockets and jets, the American Manhattan Project successfully developed atomic bombs. By the time the bomb was ready, in early August 1945, Germany had already surrendered, but Japan was fighting on. American forces were poised to invade its home islands. The Japanese vowed to resist the invasion and fight to the death, and there was every reason to believe that it was no idle threat. American generals told President Harry S. Truman that an invasion of Japan would cost the lives of a million American soldiers and would extend the war well into 1946. Truman decided to use the new bomb. Two weeks and two atom bombs later, Japan surrendered unconditionally and the war was over.
But science is not just about offensive weapons. It plays a major role in our defences as well. Today many Americans believe that the solution to terrorism is technological rather than political. Just give millions more to the nanotechnology industry, they believe, and the United States could send bionic spy-flies into every Afghan cave, Yemenite redoubt and North African encampment. Once that’s done, Osama Bin Laden’s heirs will not be able to make a cup of coffee without a CIA spy-fly passing this vital information back to headquarters in Langley. Allocate millions more to brain research, and every airport could be equipped with ultra-sophisticated fMRI scanners that could immediately recognise angry and hateful thoughts in people’s brains. Will it really work? Who knows. Is it wise to develop bionic flies and thought-reading scanners? Not necessarily. Be that as it may, as you read these lines, the US Department of Defense is transferring millions of dollars to nanotechnology and brain laboratories for work on these and other such ideas.
This obsession with military technology – from tanks to atom bombs to spy-flies – is a surprisingly recent phenomenon. Up until the nineteenth century, the vast majority of military revolutions were the product of organisational rather than technological changes. When alien civilisations met for the first time, technological gaps sometimes played an important role. But even in such cases, few thought of deliberately creating or enlarging such gaps. Most empires did not rise thanks to technological wizardry, and their rulers did not give much thought to technological improvement. The Arabs did not defeat the Sassanid Empire thanks to superior bows or swords, the Seljuks had no technological advantage over the Byzantines, and the Mongols did not conquer China with the help of some ingenious new weapon. In fact, in all these cases the vanquished enjoyed superior military and civilian technology.
The Roman army is a particularly good example. It was the best army of its day, yet technologically speaking, Rome had no edge over Carthage, Macedonia or the Seleucid Empire. Its advantage rested on efficient organisation, iron discipline and huge manpower reserves. The Roman army never set up a research and development department, and its weapons remained more or less the same for centuries on end. If the legions of Scipio Aemilianus – the general who levelled Carthage and defeated the Numantians in the second century BC – had suddenly popped up 500 years later in the age of Constantine the Great, Scipio would have had a fair chance of beating Constantine. Now imagine what would happen to a general from a few centuries back – say Napoleon – if he led his troops against a modern armoured brigade. Napoleon was a brilliant tactician, and his men were crack professionals, but their skills would be useless in the face of modern weaponry.
As in Rome, so also in ancient China: most generals and philosophers did not think it their duty to develop new weapons. The most important military invention in the history of China was gunpowder. Yet to the best of our knowledge, gunpowder was invented accidentally, by Daoist alchemists searching for the elixir of life. Gunpowder’s subsequent career is even more telling. One might have thought that the Daoist alchemists would have made China master of the world. In fact, the Chinese used the new compound mainly for firecrackers. Even as the Song Empire collapsed in the face of a Mongol invasion, no emperor set up a medieval Manhattan Project to save the empire by inventing a doomsday weapon. Only in the fifteenth century – about 600 years after the invention of gunpowder – did cannons become a decisive factor on Afro-Asian battlefields. Why did it take so long for the deadly potential of this substance to be put to military use? Because it appeared at a time when neither kings, scholars, nor merchants thought that new military technology could save them or make them rich.
The situation began to change in the fifteenth and sixteenth centuries, but another 200 years went by before most rulers evinced any interest in financing the research and development of new weapons. Logistics and strategy continued to have far greater impact on the outcome of wars than technology. The Napoleonic military machine that crushed the armies of the European powers at Austerlitz (1805) was armed with more or less the same weaponry that the army of Louis XVI had used. Napoleon himself, despite being an artilleryman, had little interest in new weapons, even though scientists and inventors tried to persuade him to fund the development of flying machines, submarines and rockets.
Science, industry and military technology intertwined only with the advent of the capitalist system and the Industrial Revolution. Once this relationship was established, however, it quickly transformed the world.
Until the Scientific Revolution most human cultures did not believe in progress. They thought the golden age was in the past, and that the world was stagnant, if not deteriorating. Strict adherence to the wisdom of the ages might perhaps bring back the good old times, and human ingenuity might conceivably improve this or that facet of daily life. However, it was considered impossible for human know-how to overcome the world’s fundamental problems. If even Muhammad, Jesus, Buddha and Confucius – who knew everything there is to know – were unable to abolish famine, disease, poverty and war from the world, how could we expect to do so?
Many faiths believed that some day a messiah would appear and end all wars, famines and even death itself. But the notion that humankind could do so by discovering new knowledge and inventing new tools was worse than ludicrous – it was hubris. The story of the Tower of Babel, the story of Icarus, the story of the Golem and countless other myths taught people that any attempt to go beyond human limitations would inevitably lead to disappointment and disaster.
When modern culture admitted that there were many important things that it still did not know, and when that admission of ignorance was married to the idea that scientific discoveries could give us new powers, people began suspecting that real progress might be possible after all. As science began to solve one unsolvable problem after another, many became convinced that humankind could overcome any and every problem by acquiring and applying new knowledge. Poverty, sickness, wars, famines, old age and death itself were not the inevitable fate of humankind. They were simply the fruits of our ignorance.
34. Benjamin Franklin disarming the gods.
{Painting: Franklin’s Experiment, June 1752, published by Currier & Ives © Museum of the City of New York/Corbis.}
A famous example is lightning. Many cultures believed that lightning was the hammer of an angry god, used to punish sinners. In the middle of the eighteenth century, in one of the most celebrated experiments in scientific history, Benjamin Franklin flew a kite during a lightning storm to test the hypothesis that lightning is simply an electric current. Franklin’s empirical observations, coupled with his knowledge about the qualities of electrical energy, enabled him to invent the lightning rod and disarm the gods.
Poverty is another case in point. Many cultures have viewed poverty as an inescapable part of this imperfect world. According to the New Testament, shortly before the crucifixion a woman anointed Christ with precious oil worth 300 denarii. Jesus’ disciples scolded the woman for wasting such a huge sum of money instead of giving it to the poor, but Jesus defended her, saying that ‘The poor you will always have with you, and you can help them any time you want. But you will not always have me’ (Mark 14:7). Today, fewer and fewer people, including fewer and fewer Christians, agree with Jesus on this matter. Poverty is increasingly seen as a technical problem amenable to intervention. It’s common wisdom that policies based on the latest findings in agronomy, economics, medicine and sociology can eliminate poverty.
And indeed, many parts of the world have already been freed from the worst forms of deprivation. Throughout history, societies have suffered from two kinds of poverty: social poverty, which withholds from some people the opportunities available to others; and biological poverty, which puts the very lives of individuals at risk due to lack of food and shelter. Perhaps social poverty can never be eradicated, but in many countries around the world biological poverty is a thing of the past.
Until recently, most people hovered very close to the biological poverty line, below which a person lacks enough calories to sustain life for long. Even small miscalculations or misfortunes could easily push people below that line, into starvation. Natural disasters and man-made calamities often plunged entire populations over the abyss, causing the death of millions. Today most of the world’s people have a safety net stretched below them. Individuals are protected from personal misfortune by insurance, state-sponsored social security and a plethora of local and international NGOs. When calamity strikes an entire region, worldwide relief efforts are usually successful in preventing the worst. People still suffer from numerous degradations, humiliations and poverty-related illnesses, but in most countries nobody is starving to death. In fact, in many societies more people are in danger of dying from obesity than from starvation.
Of all mankind’s ostensibly insoluble problems, one has remained the most vexing, interesting and important: the problem of death itself. Before the late modern era, most religions and ideologies took it for granted that death was our inevitable fate. Moreover, most faiths turned death into the main source of meaning in life. Try to imagine Islam, Christianity or the ancient Egyptian religion in a world without death. These creeds taught people that they must come to terms with death and pin their hopes on the afterlife, rather than seek to overcome death and live forever here on earth. The best minds were busy giving meaning to death, not trying to escape it.
That is the theme of the most ancient myth to come down to us – the Gilgamesh myth of ancient Sumer. Its hero is the strongest and most capable man in the world, King Gilgamesh of Uruk, who could defeat anyone in battle. One day, Gilgamesh’s best friend, Enkidu, died. Gilgamesh sat by the body and observed it for many days, until he saw a worm dropping out of his friend’s nostril. At that moment Gilgamesh was gripped by a terrible horror, and he resolved that he himself would never die. He would somehow find a way to defeat death. Gilgamesh then undertook a journey to the end of the universe, killing lions, battling scorpion-men and finding his way into the underworld. There he shattered the mysterious “stone things” of Urshanabi, the ferryman of the river of the dead, and found Utnapishtim, the last survivor of the primordial flood. Yet Gilgamesh failed in his quest. He returned home empty-handed, as mortal as ever, but with one new piece of wisdom. When the gods created man, Gilgamesh had learned, they set death as man’s inevitable destiny, and man must learn to live with it.
Disciples of progress do not share this defeatist attitude. For men of science, death is not an inevitable destiny, but merely a technical problem. People die not because the gods decreed it, but due to various technical failures – a heart attack, cancer, an infection. And every technical problem has a technical solution. If the heart flutters, it can be stimulated by a pacemaker or replaced by a new heart. If cancer rampages, it can be killed with drugs or radiation. If bacteria proliferate, they can be subdued with antibiotics. True, at present we cannot solve all technical problems. But we are working on them. Our best minds are not wasting their time trying to give meaning to death. Instead, they are busy investigating the physiological, hormonal and genetic systems responsible for disease and old age. They are developing new medicines, revolutionary treatments and artificial organs that will lengthen our lives and might one day vanquish the Grim Reaper himself.
Until recently, you would not have heard scientists, or anyone else, speak so bluntly. ‘Defeat death?! What nonsense! We are only trying to cure cancer, tuberculosis and Alzheimer’s disease,’ they insisted. People avoided the issue of death because the goal seemed too elusive. Why create unreasonable expectations? We’re now at a point, however, where we can be frank about it. The leading project of the Scientific Revolution is to give humankind eternal life. Even if killing death seems a distant goal, we have already achieved things that were inconceivable a few centuries ago. In 1199, King Richard the Lionheart was struck by an arrow in his left shoulder. Today we’d say he incurred a minor injury. But in 1199, in the absence of antibiotics and effective sterilisation methods, this minor flesh wound became infected and gangrene set in. The only way to stop the spread of gangrene in twelfth-century Europe was to cut off the infected limb, impossible when the infection was in a shoulder. The gangrene spread through the Lionheart’s body and no one could help the king. He died in great agony two weeks later.
As recently as the nineteenth century, the best doctors still did not know how to prevent infection and stop the putrefaction of tissues. In field hospitals doctors routinely cut off the hands and legs of soldiers who received even minor limb injuries, fearing gangrene. These amputations, as well as all other medical procedures (such as tooth extraction), were done without any anaesthetics. The first anaesthetics – ether, chloroform and morphine – entered regular usage in Western medicine only in the middle of the nineteenth century. Before the advent of chloroform, four soldiers had to hold down a wounded comrade while the doctor sawed off the injured limb. On the morning after the battle of Waterloo (1815), heaps of sawn-off hands and legs could be seen adjacent to the field hospitals. In those days, carpenters and butchers who enlisted to the army were often sent to serve in the medical corps, because surgery required little more than knowing your way with knives and saws.
In the two centuries since Waterloo, things have changed beyond recognition. Pills, injections and sophisticated operations save us from a spate of illnesses and injuries that once amounted to an inescapable death sentence. They also protect us against countless daily aches and ailments, which premodern people simply accepted as part of life. The average life expectancy jumped from around twenty-five to forty years to around sixty-seven years in the world as a whole, and to around eighty years in the developed world.8
Death suffered its worst setbacks in the arena of child mortality. Until the twentieth century, between a quarter and a third of the children of agricultural societies never reached adulthood. Most succumbed to childhood diseases such as diphtheria, measles and smallpox. In seventeenth-century England, 150 out of every 1,000 newborns died during their first year, and a third of all children were dead before they reached fifteen.9 Today, only five out of 1,000 English babies die during their first year, and only seven out of 1,000 die before age fifteen.10
We can better grasp the full impact of these figures by setting aside statistics and telling some stories. A good example is the family of King Edward I of England (1237–1307) and his wife, Queen Eleanor (1241–90). Their children enjoyed the best conditions and the most nurturing surroundings that could be provided in medieval Europe. They lived in palaces, ate as much food as they liked, had plenty of warm clothing, well-stocked fireplaces, the cleanest water available, an army of servants and the best doctors. The sources mention sixteen children that Queen Eleanor bore between 1255 and 1284:
1. An anonymous daughter, born in 1255, died at birth.
2. A daughter, Catherine, died either at age one or age three.
3. A daughter, Joan, died at six months.
4. A son, John, died at age five.
5. A son, Henry, died at age six.
6. A daughter, Eleanor, died at age twenty-nine.
7. An anonymous daughter died at five months.
8. A daughter, Joan, died at age thirty-five.
9. A son, Alphonso, died at age ten.
10. A daughter, Margaret, died at age fifty-eight.
11. A daughter, Berengeria, died at age two.
12. An anonymous daughter died shortly after birth.
13. A daughter, Mary, died at age fifty-three.
14. An anonymous son died shortly after birth.
15. A daughter, Elizabeth, died at age thirty-four.
16. A son, Edward.
The youngest, Edward, was the first of the boys to survive the dangerous years of childhood, and at his father’s death he ascended the English throne as King Edward II. In other words, it took Eleanor sixteen tries to carry out the most fundamental mission of an English queen – to provide her husband with a male heir. Edward II’s mother must have been a woman of exceptional patience and fortitude. Not so the woman Edward chose for his wife, Isabella of France. She had him murdered when he was forty-three.11
To the best of our knowledge, Eleanor and Edward I were a healthy couple and passed no fatal hereditary illnesses on to their children. Nevertheless, ten out of the sixteen – 62 per cent – died during childhood. Only six managed to live beyond the age of eleven, and only three – just 18 per cent – lived beyond the age of forty. In addition to these births, Eleanor most likely had a number of pregnancies that ended in miscarriage. On average, Edward and Eleanor lost a child every three years, ten children one after another. It’s nearly impossible for a parent today to imagine such loss.
How long will the Gilgamesh Project – the quest for immortality – take to complete? A hundred years? Five hundred years? A thousand years? When we recall how little we knew about the human body in 1900, and how much knowledge we have gained in a single century, there is cause for optimism. Genetic engineers have recently managed to double the average life expectancy of Caenorhabditis elegans worms.12 Could they do the same for Homo sapiens? Nanotechnology experts are developing a bionic immune system composed of millions of nano-robots that would inhabit our bodies, open blocked blood vessels, fight viruses and bacteria, eliminate cancerous cells and even reverse ageing processes.13 A few serious scholars suggest that by 2050, some humans will become a-mortal (not immortal, because they could still die of some accident, but a-mortal, meaning that in the absence of fatal trauma their lives could be extended indefinitely).
Whether or not Project Gilgamesh succeeds, from a historical perspective it is fascinating to see that most late-modern religions and ideologies have already taken death and the afterlife out of the equation. Until the eighteenth century, religions considered death and its aftermath central to the meaning of life. Beginning in the eighteenth century, religions and ideologies such as liberalism, socialism and feminism lost all interest in the afterlife. What, exactly, happens to a Communist after he or she dies? What happens to a capitalist? What happens to a feminist? It is pointless to look for the answer in the writings of Marx, Adam Smith or Simone de Beauvoir. The only modern ideology that still awards death a central role is nationalism. In its more poetic and desperate moments, nationalism promises that whoever dies for the nation will forever live in its collective memory. Yet this promise is so fuzzy that even most nationalists do not really know what to make of it.
We are living in a technical age. Many are convinced that science and technology hold the answers to all our problems. We should just let the scientists and technicians go on with their work, and they will create heaven here on earth. But science is not an enterprise that takes place on some superior moral or spiritual plane above the rest of human activity. Like all other parts of our culture, it is shaped by economic, political and religious interests.
Science is a very expensive affair. A biologist seeking to understand the human immune system requires laboratories, test tubes, chemicals and electron microscopes, not to mention lab assistants, electricians, plumbers and cleaners. An economist seeking to model credit markets must buy computers, set up giant databanks and develop complicated data-processing programs. An archaeologist who wishes to understand the behaviour of archaic hunter-gatherers must travel to distant lands, excavate ancient ruins and date fossilised bones and artefacts. All of this costs money.
During the past 500 years modern science has achieved wonders thanks largely to the willingness of governments, businesses, foundations and private donors to channel billions of dollars into scientific research. These billions have done much more to chart the universe, map the planet and catalogue the animal kingdom than did Galileo Galilei, Christopher Columbus and Charles Darwin. If these particular geniuses had never been born, their insights would probably have occurred to others. But had the proper funding been unavailable, no amount of intellectual brilliance could have compensated for it. If Darwin had never been born, for example, we’d today attribute the theory of evolution to Alfred Russel Wallace, who came up with the idea of evolution via natural selection independently of Darwin and just a few years later. But if the European powers had not financed geographical, zoological and botanical research around the world, neither Darwin nor Wallace would have had the necessary empirical data to develop the theory of evolution. It is likely that they would not even have tried.
Why did the billions start flowing from government and business coffers into labs and universities? In academic circles, many are naïve enough to believe in pure science. They believe that government and business altruistically give them money to pursue whatever research projects strike their fancy. But this hardly describes the realities of science funding.
Most scientific studies are funded because somebody believes they can help attain some political, economic or religious goal. For example, in the sixteenth century, kings and bankers channelled enormous resources to finance geographical expeditions around the world but not a penny for studying child psychology. This is because kings and bankers surmised that the discovery of new geographical knowledge would enable them to conquer new lands and set up trade empires, whereas they couldn’t see any profit in understanding child psychology.
In the 1940s the governments of America and the Soviet Union channelled enormous resources to the study of nuclear physics rather than underwater archaeology. They surmised that studying nuclear physics would enable them to develop nuclear weapons, whereas underwater archaeology was unlikely to help win wars. Scientists themselves are not always aware of the political, economic and religious interests that control the flow of money; many scientists do, in fact, act out of pure intellectual curiosity. However, only rarely do scientists dictate the scientific agenda.
Even if we wanted to finance pure science unaffected by political, economic or religious interests, it would probably be impossible. Our resources are limited, after all. Ask a congressman to allocate an additional million dollars to the National Science Foundation for basic research, and he’ll justifiably ask whether that money wouldn’t be better used to fund teacher training or to give a needed tax break to a troubled factory in his district. To channel limited resources we must answer questions such as ‘What is more important?’ and ‘What is good?’ And these are not scientific questions. Science can explain what exists in the world, how things work, and what might be in the future. By definition, it has no pretensions to knowing what should be in the future. Only religions and ideologies seek to answer such questions.
Consider the following quandary: two biologists from the same department, possessing the same professional skills, have both applied for a million-dollar grant to finance their current research projects. Professor Slughorn wants to study a disease that infects the udders of cows, causing a 10 per cent decrease in their milk production. Professor Sprout wants to study whether cows suffer mentally when they are separated from their calves. Assuming that the amount of money is limited, and that it is impossible to finance both research projects, which one should be funded?
There is no scientific answer to this question. There are only political, economic and religious answers. In today’s world, it is obvious that Slughorn has a better chance of getting the money. Not because udder diseases are scientifically more interesting than bovine mentality, but because the dairy industry, which stands to benefit from the research, has more political and economic clout than the animal-rights lobby.
Perhaps in a strict Hindu society, where cows are sacred, or in a society committed to animal rights, Professor Sprout would have a better shot. But as long as she lives in a society that values the commercial potential of milk and the health of its human citizens over the feelings of cows, she’d best write up her research proposal so as to appeal to those assumptions. For example, she might write that ‘Depression leads to a decrease in milk production. If we understand the mental world of dairy cows, we could develop psychiatric medication that will improve their mood, thus raising milk production by up to 10 per cent. I estimate that there is a global annual market of $250 million for bovine psychiatric medications.’
Science is unable to set its own priorities. It is also incapable of determining what to do with its discoveries. For example, from a purely scientific viewpoint it is unclear what we should do with our increasing understanding of genetics. Should we use this knowledge to cure cancer, to create a race of genetically engineered supermen, or to engineer dairy cows with super-sized udders? It is obvious that a liberal government, a Communist government, a Nazi government and a capitalist business corporation would use the very same scientific discovery for completely different purposes, and there is no scientific reason to prefer one usage over others.
In short, scientific research can flourish only in alliance with some religion or ideology. The ideology justifies the costs of the research. In exchange, the ideology influences the scientific agenda and determines what to do with the discoveries. Hence in order to comprehend how humankind has reached Alamogordo and the moon – rather than any number of alternative destinations – it is not enough to survey the achievements of physicists, biologists and sociologists. We have to take into account the ideological, political and economic forces that shaped physics, biology and sociology, pushing them in certain directions while neglecting others.
Two forces in particular deserve our attention: imperialism and capitalism. The feedback loop between science, empire and capital has arguably been history’s chief engine for the past 500 years. The following chapters analyse its workings. First we’ll look at how the twin turbines of science and empire were latched to one another, and then learn how both were hitched up to the money pump of capitalism.
HOW FAR IS THE SUN FROM THE EARTH? It’s a question that intrigued many early modern astronomers, particularly after Copernicus argued that the sun, rather than the earth, is located at the centre of the universe. A number of astronomers and mathematicians tried to calculate the distance, but their methods provided widely varying results. A reliable means of making the measurement was finally proposed in the middle of the eighteenth century. Every few years, the planet Venus passes directly between the sun and the earth. The duration of the transit differs when seen from distant points on the earth’s surface because of the tiny difference in the angle at which the observer sees it. If several observations of the same transit were made from different continents, simple trigonometry was all it would take to calculate our exact distance from the sun.
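To make the geometry concrete, here is a simplified sketch of the parallax idea (the symbols and figures are illustrative; the actual eighteenth-century procedure, proposed by Edmond Halley, worked from differences in transit duration rather than from directly measured angular shifts, but it rests on the same principle). Two observers separated by a baseline $b$ on the earth’s surface see Venus projected against slightly different points of the sun’s disc. The relative shift is

$$\Delta\theta \;=\; \frac{b}{d_{EV}} - \frac{b}{d_{ES}} \;=\; \frac{b}{d_{ES}}\cdot\frac{\rho}{1-\rho}, \qquad \rho \equiv \frac{r_{\mathrm{Venus}}}{r_{\mathrm{Earth}}} \approx 0.72,$$

where $d_{EV}$ and $d_{ES}$ are the earth–Venus and earth–sun distances during the transit, and $\rho$, the ratio of the two planets’ orbital radii, was already known from Kepler’s third law, which requires only the orbital periods. Measuring $\Delta\theta$ therefore yields the earth–sun distance directly:

$$d_{ES} \;\approx\; \frac{\rho}{1-\rho}\cdot\frac{b}{\Delta\theta} \;\approx\; 2.6\,\frac{b}{\Delta\theta}.$$

The farther apart the observing stations, the larger the baseline $b$ and the easier $\Delta\theta$ is to measure, which is why the expeditions had to be scattered across the globe.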
Astronomers predicted that the next Venus transits would occur in 1761 and 1769. So expeditions were sent from Europe to the four corners of the world in order to observe the transits from as many distant points as possible. In 1761 scientists observed the transit from Siberia, North America, Madagascar and South Africa. As the 1769 transit approached, the European scientific community mounted a supreme effort, and scientists were dispatched as far as northern Canada and California (which was then a wilderness). The Royal Society of London for the Improvement of Natural Knowledge concluded that this was not enough. To obtain the most accurate results it was imperative to send an astronomer all the way to the south-western Pacific Ocean.
The Royal Society resolved to send an eminent astronomer, Charles Green, to Tahiti, and spared neither effort nor money. But, since it was funding such an expensive expedition, it hardly made sense to use it to make just a single astronomical observation. Green was therefore accompanied by a team of eight other scientists from several disciplines, headed by botanists Joseph Banks and Daniel Solander. The team also included artists assigned to produce drawings of the new lands, plants, animals and peoples that the scientists would no doubt encounter. Equipped with the most advanced scientific instruments that Banks and the Royal Society could buy, the expedition was placed under the command of Captain James Cook, an experienced seaman as well as an accomplished geographer and ethnographer.
The expedition left England in 1768, observed the Venus transit from Tahiti in 1769, reconnoitred several Pacific islands, visited Australia and New Zealand, and returned to England in 1771. It brought back enormous quantities of astronomical, geographical, meteorological, botanical, zoological and anthropological data. Its findings made major contributions to a number of disciplines, sparked the imagination of Europeans with astonishing tales of the South Pacific, and inspired future generations of naturalists and astronomers.
One of the fields that benefited from the Cook expedition was medicine. At the time, ships that set sail to distant shores knew that more than half their crew members would die on the journey. The nemesis was not angry natives, enemy warships or homesickness. It was a mysterious ailment called scurvy. Men who came down with the disease grew lethargic and depressed, and their gums and other soft tissues bled. As the disease progressed, their teeth fell out, open sores appeared, and they grew feverish and jaundiced and lost control of their limbs. Between the sixteenth and eighteenth centuries, scurvy is estimated to have claimed the lives of about 2 million sailors. No one knew what caused it, and no matter what remedy was tried, sailors continued to die in droves. The turning point came in 1747, when a British physician, James Lind, conducted a controlled experiment on sailors who suffered from the disease. He separated them into several groups and gave each group a different treatment. One of the test groups was instructed to eat citrus fruits, a common folk remedy for scurvy. The patients in this group promptly recovered. Lind did not know what the citrus fruits had that the sailors’ bodies lacked, but we now know that it is vitamin C. A typical shipboard diet at that time was notably lacking in foods that are rich in this essential nutrient. On long-range voyages sailors usually subsisted on biscuits and beef jerky, and ate almost no fruits or vegetables.
The Royal Navy was not convinced by Lind’s experiments, but James Cook was. He resolved to prove the doctor right. He loaded his boat with a large quantity of sauerkraut and ordered his sailors to eat lots of fresh fruits and vegetables whenever the expedition made landfall. Cook did not lose a single sailor to scurvy. In the following decades, all the world’s navies adopted Cook’s nautical diet, and the lives of countless sailors and passengers were saved.1
However, the Cook expedition had another, far less benign result. Cook was not only an experienced seaman and geographer, but also a naval officer. The Royal Society financed a large part of the expedition’s expenses, but the ship itself was provided by the Royal Navy. The navy also seconded eighty-five well-armed sailors and marines, and equipped the ship with artillery, muskets, gunpowder and other weaponry. Much of the information collected by the expedition – particularly the astronomical, geographical, meteorological and anthropological data – was of obvious political and military value. The discovery of an effective treatment for scurvy greatly contributed to British control of the world’s oceans and its ability to send armies to the other side of the world. Cook claimed for Britain many of the islands and lands he ‘discovered’, most notably Australia. The Cook expedition laid the foundation for the British occupation of the south-western Pacific Ocean; for the conquest of Australia, Tasmania and New Zealand; for the settlement of millions of Europeans in the new colonies; and for the extermination of their native cultures and most of their native populations.2
In the century following the Cook expedition, the most fertile lands of Australia and New Zealand were taken from their previous inhabitants by European settlers. The native population dropped by up to 90 per cent and the survivors were subjected to a harsh regime of racial oppression. For the Aborigines of Australia and the Maoris of New Zealand, the Cook expedition was the beginning of a catastrophe from which they have never recovered.
An even worse fate befell the natives of Tasmania. Having survived for 10,000 years in splendid isolation, they were completely wiped out, to the last man, woman and child, within a century of Cook’s arrival. European settlers first drove them off the richest parts of the island, and then, coveting even the remaining wilderness, hunted them down and killed them systematically. The few survivors were hounded into an evangelical concentration camp, where well-meaning but not particularly open-minded missionaries tried to indoctrinate them in the ways of the modern world. The Tasmanians were instructed in reading and writing, Christianity and various ‘productive skills’ such as sewing clothes and farming. But they refused to learn. They became ever more melancholic, stopped having children, lost all interest in life, and finally chose the only escape route from the modern world of science and progress – death.
Alas, science and progress pursued them even to the afterlife. The corpses of the last Tasmanians were seized in the name of science by anthropologists and curators. They were dissected, weighed and measured, and analysed in learned articles. The skulls and skeletons were then put on display in museums and anthropological collections. Only in 1976 did the Tasmanian Museum give up for burial the skeleton of Truganini, the last native Tasmanian, who had died a hundred years earlier. The English Royal College of Surgeons held on to samples of her skin and hair until 2002.
Was Cook’s ship a scientific expedition protected by a military force or a military expedition with a few scientists tagging along? That’s like asking whether your petrol tank is half empty or half full. It was both. The Scientific Revolution and modern imperialism were inseparable. People such as Captain James Cook and the botanist Joseph Banks could hardly distinguish science from empire. Nor could luckless Truganini.
The fact that people from a large island in the northern Atlantic conquered a large island south of Australia is one of history’s more bizarre occurrences. Not long before Cook’s expedition, the British Isles and western Europe in general were but distant backwaters of the Mediterranean world. Little of importance ever happened there. Even the Roman Empire – the only important premodern European empire – derived most of its wealth from its North African, Balkan and Middle Eastern provinces. Rome’s western European provinces were a poor Wild West, which contributed little aside from minerals and slaves. Northern Europe was so desolate and barbarous that it wasn’t even worth conquering.
35. Truganini, the last native Tasmanian.
{Portrait: C. A. Woolley, 1866, National Library of Australia (ref: an23378504).}
Only at the end of the fifteenth century did Europe become a hothouse of important military, political, economic and cultural developments. Between 1500 and 1750, western Europe gained momentum and became master of the ‘Outer World’, meaning the two American continents and the oceans. Yet even then Europe was no match for the great powers of Asia. Europeans managed to conquer America and gain supremacy at sea mainly because the Asiatic powers showed little interest in them. The early modern era was a golden age for the Ottoman Empire in the Mediterranean, the Safavid Empire in Persia, the Mughal Empire in India, and the Chinese Ming and Qing dynasties. They expanded their territories significantly and enjoyed unprecedented demographic and economic growth. In 1775 Asia accounted for 80 per cent of the world economy. The combined economies of India and China alone represented two-thirds of global production. In comparison, Europe was an economic dwarf.3
The global centre of power shifted to Europe only between 1750 and 1850, when Europeans humiliated the Asian powers in a series of wars and conquered large parts of Asia. By 1900 Europeans firmly controlled the world’s economy and most of its territory. In 1950 western Europe and the United States together accounted for more than half of global production, whereas China’s portion had been reduced to 5 per cent.4 Under the European aegis a new global order and global culture emerged. Today all humans are, to a much greater extent than they usually want to admit, European in dress, thought and taste. They may be fiercely anti-European in their rhetoric, but almost everyone on the planet views politics, medicine, war and economics through European eyes, and listens to music written in European modes with words in European languages. Even today’s burgeoning Chinese economy, which may soon regain its global primacy, is built on a European model of production and finance.
How did the people of this frigid finger of Eurasia manage to break out of their remote corner of the globe and conquer the entire world? Europe’s scientists are often given much of the credit. It’s unquestionable that from 1850 onward European domination rested to a large extent on the military–industrial–scientific complex and technological wizardry. All successful late modern empires cultivated scientific research in the hope of harvesting technological innovations, and many scientists spent most of their time working on arms, medicines and machines for their imperial masters. A common saying among European soldiers facing African enemies was, ‘Come what may, we have machine guns, and they don’t.’ Civilian technologies were no less important. Canned food fed soldiers, railroads and steamships transported soldiers and their provisions, while a new arsenal of medicines cured soldiers, sailors and locomotive engineers. These logistical advances played a more significant role in the European conquest of Africa than did the machine gun.
But that wasn’t the case before 1850. The military–industrial–scientific complex was still in its infancy; the technological fruits of the Scientific Revolution were unripe; and the technological gap between European, Asiatic and African powers was small. In 1770, James Cook certainly had far better technology than the Australian Aborigines, but so did the Chinese and the Ottomans. Why then was Australia explored and colonised by Captain James Cook and not by Captain Wan Zhengse or Captain Hussein Pasha? More importantly, if in 1770 Europeans had no significant technological advantage over Muslims, Indians and Chinese, how did they manage in the following century to open such a gap between themselves and the rest of the world?
Why did the military–industrial–scientific complex blossom in Europe rather than India? When Britain leaped forward, why were France, Germany and the United States quick to follow, whereas China lagged behind? When the gap between industrial and non-industrial nations became an obvious economic and political factor, why did Russia, Italy and Austria succeed in closing it, whereas Persia, Egypt and the Ottoman Empire failed? After all, the technology of the first industrial wave was relatively simple. Was it so hard for Chinese or Ottomans to engineer steam engines, manufacture machine guns and lay down railroads?
The world’s first commercial railroad opened for business in 1830, in Britain. By 1850, Western nations were criss-crossed by almost 25,000 miles of railroads – but in the whole of Asia, Africa and Latin America there were only 2,500 miles of tracks. In 1880, the West boasted more than 220,000 miles of railroads, whereas in the rest of the world there were but 22,000 miles of train lines (and most of these were laid by the British in India).5 The first railroad in China opened only in 1876. It was 15 miles long and built by Europeans – the Chinese government destroyed it the following year. In 1880 the Chinese Empire did not operate a single railroad. The first railroad in Persia was built only in 1888, and it connected Tehran with a Muslim holy site about 6 miles south of the capital. It was constructed and operated by a Belgian company. In 1950, the total railway network of Persia still amounted to a meagre 1,500 miles, in a country seven times the size of Britain.6
The Chinese and Persians did not lack technological inventions such as steam engines (which could be freely copied or bought). They lacked the values, myths, judicial apparatus and sociopolitical structures that took centuries to form and mature in the West and which could not be copied and internalised rapidly. France and the United States quickly followed in Britain’s footsteps because the French and Americans already shared the most important British myths and social structures. The Chinese and Persians could not catch up as quickly because they thought and organised their societies differently.
This explanation sheds new light on the period from 1500 to 1850. During this era Europe did not enjoy any obvious technological, political, military or economic advantage over the Asian powers, yet the continent built up a unique potential, whose importance suddenly became obvious around 1850. The apparent equality between Europe, China and the Muslim world in 1750 was a mirage. Imagine two builders, each busy constructing very tall towers. One builder uses wood and mud bricks, whereas the other uses steel and concrete. At first it seems that there is not much of a difference between the two methods, since both towers grow at a similar pace and reach a similar height. However, once a critical threshold is crossed, the wood and mud tower cannot stand the strain and collapses, whereas the steel and concrete tower grows storey by storey, as far as the eye can see.
What potential did Europe develop in the early modern period that enabled it to dominate the late modern world? There are two complementary answers to this question: modern science and capitalism. Europeans were used to thinking and behaving in a scientific and capitalist way even before they enjoyed any significant technological advantages. When the technological bonanza began, Europeans could harness it far better than anybody else. So it is hardly coincidental that science and capitalism form the most important legacy that European imperialism has bequeathed the post-European world of the twenty-first century. Europe and Europeans no longer rule the world, but science and capital are growing ever stronger. The victories of capitalism are examined in the following chapter. This chapter is dedicated to the love story between European imperialism and modern science.
Modern science flourished in and thanks to European empires. The discipline obviously owes a huge debt to ancient scientific traditions, such as those of classical Greece, China, India and Islam, yet its unique character began to take shape only in the early modern period, hand in hand with the imperial expansion of Spain, Portugal, Britain, France, Russia and the Netherlands. During the early modern period, Chinese, Indians, Muslims, Native Americans and Polynesians continued to make important contributions to the Scientific Revolution. The insights of Muslim economists were studied by Adam Smith and Karl Marx, treatments pioneered by Native American doctors found their way into English medical texts and data extracted from Polynesian informants revolutionised Western anthropology. But until the mid-twentieth century, the people who collated these myriad scientific discoveries, creating scientific disciplines in the process, were the ruling and intellectual elites of the global European empires. The Far East and the Islamic world produced minds as intelligent and curious as those of Europe. However, between 1500 and 1950 they did not produce anything that comes even close to Newtonian physics or Darwinian biology.
This does not mean that Europeans have a unique gene for science, or that they will forever dominate the study of physics and biology. Just as Islam began as an Arab monopoly but was subsequently taken over by Turks and Persians, so modern science began as a European speciality, but is today becoming a multi-ethnic enterprise.
What forged the historical bond between modern science and European imperialism? Technology was an important factor in the nineteenth and twentieth centuries, but in the early modern era it was of limited importance. The key factor was that the plant-seeking botanist and the colony-seeking naval officer shared a similar mindset. Both scientist and conqueror began by admitting ignorance – they both said, ‘I don’t know what’s out there.’ They both felt compelled to go out and make new discoveries. And they both hoped the new knowledge thus acquired would make them masters of the world.
European imperialism was entirely unlike all other imperial projects in history. Previous seekers of empire tended to assume that they already understood the world. Conquest merely utilised and spread their view of the world. The Arabs, to name one example, did not conquer Egypt, Spain or India in order to discover something they did not know. The Romans, Mongols and Aztecs voraciously conquered new lands in search of power and wealth – not of knowledge. In contrast, European imperialists set out to distant shores in the hope of obtaining new knowledge along with new territories.
James Cook was not the first explorer to think this way. The Portuguese and Spanish voyagers of the fifteenth and sixteenth centuries already did. Prince Henry the Navigator and Vasco da Gama explored the coasts of Africa and, while doing so, seized control of islands and harbours. Christopher Columbus ‘discovered’ America and immediately claimed sovereignty over the new lands for the kings of Spain. Ferdinand Magellan found a way around the world, and simultaneously laid the foundation for the Spanish conquest of the Philippines.
As time went by, the conquest of knowledge and the conquest of territory became ever more tightly intertwined. In the eighteenth and nineteenth centuries, almost every important military expedition that left Europe for distant lands had on board scientists who set out not to fight but to make scientific discoveries. When Napoleon invaded Egypt in 1798, he took 165 scholars with him. Among other things, they founded an entirely new discipline, Egyptology, and made important contributions to the study of religion, linguistics and botany.
In 1831, the Royal Navy sent the ship HMS Beagle to map the coasts of South America, the Falkland Islands and the Galapagos Islands. The navy needed this knowledge in order to tighten Britain’s imperial grip over South America. The ship’s captain, who was an amateur scientist, decided to add a geologist to the expedition to study geological formations they might encounter on the way. After several professional geologists refused his invitation, the captain offered the job to a twenty-two-year-old Cambridge graduate, Charles Darwin. Darwin had studied to become an Anglican parson but was far more interested in geology and natural sciences than in the Bible. He jumped at the opportunity, and the rest is history. The captain spent his time on the voyage drawing military maps while Darwin collected the empirical data and formulated the insights that would eventually become the theory of evolution.
On 20 July 1969, Neil Armstrong and Buzz Aldrin landed on the surface of the moon. In the months leading up to their expedition, the Apollo 11 astronauts trained in a remote moon-like desert in the western United States. The area is home to several Native American communities, and there is a story – or legend – describing an encounter between the astronauts and one of the locals.
One day as they were training, the astronauts came across an old Native American. The man asked them what they were doing there. They replied that they were part of a research expedition that would shortly travel to explore the moon. When the old man heard that, he fell silent for a few moments, and then asked the astronauts if they could do him a favour.
‘What do you want?’ they asked.
‘Well,’ said the old man, ‘the people of my tribe believe that holy spirits live on the moon. I was wondering if you could pass an important message to them from my people.’
‘What’s the message?’ asked the astronauts.
The man uttered something in his tribal language, and then asked the astronauts to repeat it again and again until they had memorised it correctly.
‘What does it mean?’ asked the astronauts.
‘Oh, I cannot tell you. It’s a secret that only our tribe and the moon spirits are allowed to know.’
When they returned to their base, the astronauts searched and searched until they found someone who could speak the tribal language, and asked him to translate the secret message. When they repeated what they had memorised, the translator started to laugh uproariously. When he calmed down, the astronauts asked him what it meant. The man explained that the sentence they had memorised so carefully said, ‘Don’t believe a single word these people are telling you. They have come to steal your lands.’
The modern ‘explore and conquer’ mentality is nicely illustrated by the development of world maps. Many cultures drew world maps long before the modern age. Obviously, none of them really knew the whole of the world. No Afro-Asian culture knew about America, and no American culture knew about Afro-Asia. But unfamiliar areas were simply left out, or filled with imaginary monsters and wonders. These maps had no empty spaces. They gave the impression of a familiarity with the entire world.
During the fifteenth and sixteenth centuries, Europeans began to draw world maps with lots of empty spaces – one indication of the development of the scientific mindset, as well as of the European imperial drive. The empty maps were a psychological and ideological breakthrough, a clear admission that Europeans were ignorant of large parts of the world.
The crucial turning point came in 1492, when Christopher Columbus sailed westward from Spain, seeking a new route to East Asia. Columbus still believed in the old ‘complete’ world maps. Using them, Columbus calculated that Japan should have been located about 4,375 miles west of Spain. In fact, more than 12,500 miles and an entire unknown continent separate East Asia from Spain. On 12 October 1492, at about 2:00 A.M., Columbus’ expedition collided with the unknown continent. Juan Rodriguez Bermejo, watching from the mast of the ship Pinta, spotted an island in what we now call the Bahamas, and shouted ‘Land! Land!’
Columbus believed he had reached a small island off the East Asian coast. He called the people he found there ‘Indians’ because he thought he had landed in the Indies – what we now call the East Indies or the Indonesian archipelago. Columbus stuck to this error for the rest of his life. The idea that he had discovered a completely unknown continent was inconceivable for him and for many of his generation. For thousands of years, not only the greatest thinkers and scholars but also the infallible Scriptures had known only Europe, Africa and Asia. Could they all have been wrong? Could the Bible have missed half the world? It would be as if in 1969, on its way to the moon, Apollo 11 had crashed into a hitherto unknown moon circling the earth, which all previous observations had somehow failed to spot. In his refusal to admit ignorance, Columbus was still a medieval man. He was convinced he knew the whole world, and even his momentous discovery failed to convince him otherwise.
36. A European world map from 1459 (Europe is in the top left corner). The map is filled with details, even when depicting areas that were completely unfamiliar to Europeans, such as southern Africa.
{© British Library Board (shelfmark add. 11267).}
The first modern man was Amerigo Vespucci, an Italian sailor who took part in several expeditions to America in the years 1499–1504. Between 1502 and 1504, two texts describing these expeditions were published in Europe. They were attributed to Vespucci. These texts argued that the new lands discovered by Columbus were not islands off the East Asian coast, but rather an entire continent unknown to the Scriptures, classical geographers and contemporary Europeans. In 1507, convinced by these arguments, a respected mapmaker named Martin Waldseemüller published an updated world map, the first to show the place where Europe’s westward-sailing fleets had landed as a separate continent. Having drawn it, Waldseemüller had to give it a name. Erroneously believing that Amerigo Vespucci had been the person who discovered it, Waldseemüller named the continent in his honour – America. The Waldseemüller map became very popular and was copied by many other cartographers, spreading the name he had given the new land. There is poetic justice in the fact that a quarter of the world, and two of its seven continents, are named after a little-known Italian whose sole claim to fame is that he had the courage to say, ‘We don’t know.’
The discovery of America was the foundational event of the Scientific Revolution. It not only taught Europeans to favour present observations over past traditions, but the desire to conquer America also obliged Europeans to search for new knowledge at breakneck speed. If they really wanted to control the vast new territories, they had to gather enormous amounts of new data about the geography, climate, flora, fauna, languages, cultures and history of the new continent. Christian Scriptures, old geography books and ancient oral traditions were of little help.
Henceforth not only European geographers, but European scholars in almost all other fields of knowledge began to draw maps with spaces left to fill in. They began to admit that their theories were not perfect and that there were important things that they did not know.
The Europeans were drawn to the blank spots on the map as if they were magnets, and promptly started filling them in. During the fifteenth and sixteenth centuries, European expeditions circumnavigated Africa, explored America, crossed the Pacific and Indian Oceans, and created a network of bases and colonies all over the world. They established the first truly global empires and knitted together the first global trade network. The European imperial expeditions transformed the history of the world: from being a series of histories of isolated peoples and cultures, it became the history of a single integrated human society.
37. The Salviati World Map, 1525. While the 1459 world map is full of continents, islands and detailed explanations, the Salviati map is mostly empty. The eye wanders south along the American coastline, until it peters out into emptiness. Anyone looking at the map and possessing even minimal curiosity is tempted to ask, ‘What’s beyond this point?’ The map gives no answers. It invites the observer to set sail and find out.
{© Firenze, Biblioteca Medicea Laurenziana, Ms. Laur. Med. Palat. 249 (mappa Salviati).}
These European explore-and-conquer expeditions are so familiar to us that we tend to overlook just how extraordinary they were. Nothing like them had ever happened before. Long-distance campaigns of conquest are not a natural undertaking. Throughout history most human societies were so busy with local conflicts and neighbourhood quarrels that they never considered exploring and conquering distant lands. Most great empires extended their control only over their immediate neighbourhood – they reached far-flung lands simply because their neighbourhood kept expanding. Thus the Romans conquered Etruria in order to defend Rome (c.350–300 BC). They then conquered the Po Valley in order to defend Etruria (c.200 BC). They subsequently conquered Provence to defend the Po Valley (c.120 BC), Gaul to defend Provence (c.50 BC), and Britain in order to defend Gaul (c. AD 50). It took them 400 years to get from Rome to London. In 350 BC, no Roman would have conceived of sailing directly to Britain and conquering it.
Occasionally an ambitious ruler or adventurer would embark on a long-range campaign of conquest, but such campaigns usually followed well-beaten imperial or commercial paths. The campaigns of Alexander the Great, for example, did not result in the establishment of a new empire, but rather in the usurpation of an existing empire – that of the Persians. The closest precedents to the modern European empires were the ancient naval empires of Athens and Carthage, and the medieval naval empire of Majapahit, which held sway over much of Indonesia in the fourteenth century. Yet even these empires rarely ventured into unknown seas – their naval exploits were local undertakings when compared to the global ventures of the modern Europeans.
Many scholars argue that the voyages of Admiral Zheng He of the Chinese Ming dynasty heralded and eclipsed the European voyages of discovery. Between 1405 and 1433, Zheng led seven huge armadas from China to the far reaches of the Indian Ocean. The largest of these comprised almost 300 ships and carried close to 30,000 people.7 They visited Indonesia, Sri Lanka, India, the Persian Gulf, the Red Sea and East Africa. Chinese ships anchored in Jedda, the main harbour of the Hejaz, and in Malindi, on the Kenyan coast. Columbus’ fleet of 1492 – which consisted of three small ships manned by 120 sailors – was like a trio of mosquitoes compared to Zheng He’s drove of dragons.8
Yet there was a crucial difference. Zheng He explored the oceans, and assisted pro-Chinese rulers, but he did not try to conquer or colonise the countries he visited. Moreover, the expeditions of Zheng He were not deeply rooted in Chinese politics and culture. When the ruling faction in Beijing changed during the 1430s, the new overlords abruptly terminated the operation. The great fleet was dismantled, crucial technical and geographical knowledge was lost, and no explorer of such stature and means ever set out again from a Chinese port. Chinese rulers in the coming centuries, like most Chinese rulers in previous centuries, restricted their interests and ambitions to the Middle Kingdom’s immediate environs.
The Zheng He expeditions prove that Europe did not enjoy an outstanding technological edge. What made Europeans exceptional was their unparalleled and insatiable ambition to explore and conquer. Although they might have had the ability, the Romans never attempted to conquer India or Scandinavia, the Persians never attempted to conquer Madagascar or Spain, and the Chinese never attempted to conquer Indonesia or Africa. Most Chinese rulers left even nearby Japan to its own devices. There was nothing peculiar about that. The oddity is that early modern Europeans caught a fever that drove them to sail to distant and completely unknown lands full of alien cultures, take one step on to their beaches, and immediately declare, ‘I claim all these territories for my king!’
38. Zheng He’s flagship next to that of Columbus.
{Illustration © Neil Gower.}
Around 1517, Spanish colonists in the Caribbean islands began to hear vague rumours about a powerful empire somewhere in the centre of the Mexican mainland. A mere four years later, the Aztec capital was a smouldering ruin, the Aztec Empire was a thing of the past, and Hernán Cortés lorded over a vast new Spanish Empire in Mexico.
The Spaniards did not stop to congratulate themselves or even to catch their breath. They immediately commenced explore-and-conquer operations in all directions. The previous rulers of Central America – the Aztecs, the Toltecs, the Maya – barely knew South America existed, and over the course of 2,000 years never made any attempt to subjugate it. Yet within little more than ten years of the Spanish conquest of Mexico, Francisco Pizarro had discovered the Inca Empire in South America, vanquishing it in 1532.
Had the Aztecs and Incas shown a bit more interest in the world surrounding them – and had they known what the Spaniards had done to their neighbours – they might have resisted the Spanish conquest more keenly and successfully. In the years separating Columbus’ first journey to America (1492) from the landing of Cortés in Mexico (1519), the Spaniards conquered most of the Caribbean islands, setting up a chain of new colonies. For the subjugated natives, these colonies were hell on earth. They were ruled with an iron fist by greedy and unscrupulous colonists who enslaved them and set them to work in mines and plantations, killing anyone who offered the slightest resistance. Most of the native population soon died, either because of the harsh working conditions or the virulence of the diseases that hitch-hiked to America on the conquerors’ sailing ships. Within twenty years, almost the entire native Caribbean population was wiped out. The Spanish colonists began importing African slaves to fill the vacuum.
This genocide took place on the very doorstep of the Aztec Empire, yet when Cortés landed on the empire’s eastern coast, the Aztecs knew nothing about it. The coming of the Spaniards was the equivalent of an alien invasion from outer space. The Aztecs were convinced that they knew the entire world and that they ruled most of it. To them it was unimaginable that outside their domain could exist anything like these Spaniards. When Cortés and his men landed on the sunny beaches of today’s Vera Cruz, it was the first time the Aztecs encountered a completely unknown people.
The Aztecs did not know how to react. They had trouble deciding what these strangers were. Unlike all known humans, the aliens had white skins. They also had lots of facial hair. Some had hair the colour of the sun. They stank horribly. (Native hygiene was far better than Spanish hygiene. When the Spaniards first arrived in Mexico, natives bearing incense burners were assigned to accompany them wherever they went. The Spaniards thought it was a mark of divine honour. We know from native sources that they found the newcomers’ smell unbearable.)
Map 7. The Aztec and Inca empires at the time of the Spanish conquest.
{Maps by Neil Gower}
The aliens’ material culture was even more bewildering. They came in giant ships, the like of which the Aztecs had never imagined, let alone seen. They rode on the back of huge and terrifying animals, swift as the wind. They could produce lightning and thunder out of shiny metal sticks. They had flashing long swords and impenetrable armour, against which the natives’ wooden swords and flint spears were useless.
Some Aztecs thought these must be gods. Others argued that they were demons, or the ghosts of the dead, or powerful sorcerers. Instead of concentrating all available forces and wiping out the Spaniards, the Aztecs deliberated, dawdled and negotiated. They saw no reason to rush. After all, Cortés had no more than 550 Spaniards with him. What could 550 men do to an empire of millions?
Cortés was equally ignorant about the Aztecs, but he and his men held significant advantages over their adversaries. While the Aztecs had no experience to prepare them for the arrival of these strange-looking and foul-smelling aliens, the Spaniards knew that the earth was full of unknown human realms, and no one had greater expertise in invading alien lands and dealing with situations about which they were utterly ignorant. For the modern European conqueror, like the modern European scientist, plunging into the unknown was exhilarating.
So when Cortés anchored off that sunny beach in July 1519, he did not hesitate to act. Like a science-fiction alien emerging from his spaceship, he declared to the awestruck locals: ‘We come in peace. Take us to your leader.’ Cortés explained that he was a peaceful emissary from the great king of Spain, and asked for a diplomatic interview with the Aztec ruler, Montezuma II. (This was a shameless lie. Cortés led an independent expedition of greedy adventurers. The king of Spain had never heard of Cortés, nor of the Aztecs.) Cortés was given guides, food and some military assistance by local enemies of the Aztecs. He then marched towards the Aztec capital, the great metropolis of Tenochtitlan.
The Aztecs allowed the aliens to march all the way to the capital, then respectfully led the aliens’ leader to meet Emperor Montezuma. In the middle of the interview, Cortés gave a signal, and steel-armed Spaniards butchered Montezuma’s bodyguards (who were armed only with wooden clubs and stone blades). The honoured guest took his host prisoner.
Cortés was now in a very delicate situation. He had captured the emperor, but was surrounded by tens of thousands of furious enemy warriors, millions of hostile civilians, and an entire continent about which he knew practically nothing. He had at his disposal only a few hundred Spaniards, and the closest Spanish reinforcements were in Cuba, more than a thousand miles away.
Cortés kept Montezuma captive in the palace, making it look as if the king remained free and in charge and as if the ‘Spanish ambassador’ were no more than a guest. The Aztec Empire was an extremely centralised polity, and this unprecedented situation paralysed it. Montezuma continued to behave as if he ruled the empire, and the Aztec elite continued to obey him, which meant they obeyed Cortés. This situation lasted for several months, during which time Cortés interrogated Montezuma and his attendants, trained translators in a variety of local languages, and sent small Spanish expeditions in all directions to become familiar with the Aztec Empire and the various tribes, peoples and cities that it ruled.
The Aztec elite eventually revolted against Cortés and Montezuma, elected a new emperor, and drove the Spaniards from Tenochtitlan. However, by now numerous cracks had appeared in the imperial edifice. Cortés used the knowledge he had gained to prise the cracks open wider and split the empire from within. He convinced many of the empire’s subject peoples to join him against the ruling Aztec elite. The subject peoples miscalculated badly. They hated the Aztecs, but knew nothing of Spain or the Caribbean genocide. They assumed that with Spanish help they could shake off the Aztec yoke. The idea that the Spanish would take over never occurred to them. They were sure that if Cortés and his few hundred henchmen caused any trouble, they could easily be overwhelmed. The rebellious peoples provided Cortés with an army of tens of thousands of local troops, and with its help Cortés besieged Tenochtitlan and conquered the city.
At this stage more and more Spanish soldiers and settlers began arriving in Mexico, some from Cuba, others all the way from Spain. When the local peoples realised what was happening, it was too late. Within a century of the landing at Vera Cruz, the native population of the Americas had shrunk by about 90 per cent, due mainly to unfamiliar diseases that reached America with the invaders. The survivors found themselves under the thumb of a greedy and racist regime that was far worse than that of the Aztecs.
Ten years after Cortés landed in Mexico, Pizarro arrived on the shore of the Inca Empire. He had far fewer soldiers than Cortés – his expedition numbered just 168 men! Yet Pizarro benefited from all the knowledge and experience gained in previous invasions. The Inca, in contrast, knew nothing about the fate of the Aztecs. Pizarro plagiarised Cortés. He declared himself a peaceful emissary from the king of Spain, invited the Inca ruler, Atahualpa, to a diplomatic interview, and then kidnapped him. Pizarro proceeded to conquer the paralysed empire with the help of local allies. If the subject peoples of the Inca Empire had known the fate of the inhabitants of Mexico, they would not have thrown in their lot with the invaders. But they did not know.
The native peoples of America were not the only ones to pay a heavy price for their parochial outlook. The great empires of Asia – the Ottoman, the Safavid, the Mughal and the Chinese – very quickly heard that the Europeans had discovered something big. Yet they displayed little interest in these discoveries. They continued to believe that the world revolved around Asia, and made no attempt to compete with the Europeans for control of America or of the new ocean lanes in the Atlantic and the Pacific. Even puny European kingdoms such as Scotland and Denmark sent a few explore-and-conquer expeditions to America, but not one expedition of either exploration or conquest was ever sent to America from the Islamic world, India or China. The first non-European power that tried to send a military expedition to America was Japan. That happened in June 1942, when a Japanese expedition conquered Kiska and Attu, two small islands off the Alaskan coast, capturing in the process ten US soldiers and a dog. The Japanese never got any closer to the mainland.
It is hard to argue that the Ottomans or Chinese were too far away, or that they lacked the technological, economic or military wherewithal. The resources that sent Zheng He from China to East Africa in the 1420s should have been enough to reach America. The Chinese just weren’t interested. The first Chinese world map to show America was not issued until 1602 – and then by a European missionary!
For 300 years, Europeans enjoyed undisputed mastery in America and Oceania, in the Atlantic and the Pacific. The only significant struggles in those regions were between different European powers. The wealth and resources accumulated by the Europeans eventually enabled them to invade Asia too, defeat its empires, and divide it among themselves. When the Ottomans, Persians, Indians and Chinese woke up and began paying attention, it was too late.
Only in the twentieth century did non-European cultures adopt a truly global vision. This was one of the crucial factors that led to the collapse of European hegemony. Thus in the Algerian War of Independence (1954–62), Algerian guerrillas defeated a French army with an overwhelming numerical, technological and economic advantage. The Algerians prevailed because they were supported by a global anti-colonial network, and because they worked out how to harness the world’s media to their cause – as well as public opinion in France itself. The defeat that little North Vietnam inflicted on the American colossus was based on a similar strategy. These guerrilla forces showed that even superpowers could be defeated if a local struggle became a global cause. It is interesting to contemplate what might have happened had Montezuma been able to manipulate public opinion in Spain and gain assistance from one of Spain’s rivals – Portugal, France or the Ottoman Empire.
Modern science and modern empires were motivated by the restless feeling that perhaps something important awaited beyond the horizon – something they had better explore and master. Yet the connection between science and empire went much deeper. Not just the motivation, but also the practices of empire-builders were entangled with those of scientists. For modern Europeans, building an empire was a scientific project, while setting up a scientific discipline was an imperial project.
When the Muslims conquered India, they did not bring along archaeologists to systematically study Indian history, anthropologists to study Indian cultures, geologists to study Indian soils, or zoologists to study Indian fauna. When the British conquered India, they did all of these things. On 10 April 1802 the Great Survey of India was launched. It lasted sixty years. With the help of tens of thousands of native labourers, scholars and guides, the British carefully mapped the whole of India, marking borders, measuring distances, and even calculating for the first time the exact height of Mount Everest and the other Himalayan peaks. The British explored the military resources of Indian provinces and the location of their gold mines, but they also took the trouble to collect information about rare Indian spiders, to catalogue colourful butterflies, to trace the ancient origins of extinct Indian languages, and to dig up forgotten ruins.
Mohenjo-daro was one of the chief cities of the Indus Valley civilisation, which flourished in the third millennium BC and was destroyed around 1900 BC. None of India’s pre-British rulers – neither the Mauryas, nor the Guptas, nor the Delhi sultans, nor the great Mughals – had given the ruins a second glance. But a British archaeological survey took notice of the site in 1922. A British team then excavated it, and discovered the first great civilisation of India, which no Indian had been aware of.
Another telling example of British scientific curiosity was the deciphering of cuneiform script. This was the main script used throughout the Middle East for close to 3,000 years, but the last person able to read it probably died sometime in the early first millennium AD. Since then, inhabitants of the region frequently encountered cuneiform inscriptions on monuments, steles, ancient ruins and broken pots. But they had no idea how to read the weird, angular scratches and, as far as we know, they never tried. Cuneiform came to the attention of Europeans in 1618, when the Spanish ambassador in Persia went sightseeing in the ruins of ancient Persepolis, where he saw inscriptions that nobody could explain to him. News of the unknown script spread among European savants and piqued their curiosity. In 1657 European scholars published the first transcription of a cuneiform text from Persepolis. More and more transcriptions followed, and for close to two centuries scholars in the West tried to decipher them. None succeeded.
In the 1830s, a British officer named Henry Rawlinson was sent to Persia to help the shah train his army in the European style. In his spare time Rawlinson travelled around Persia and one day he was led by local guides to a cliff in the Zagros Mountains and shown the huge Behistun Inscription. About fifty feet high and eighty feet wide, it had been etched high up on the cliff face at the command of King Darius I sometime around 500 BC. It was written in cuneiform script in three languages: Old Persian, Elamite and Babylonian. The inscription was well known to the local population, but nobody could read it. Rawlinson became convinced that if he could decipher the writing it would enable him and other scholars to read the numerous inscriptions and texts that were at the time being discovered all over the Middle East, opening a door into an ancient and forgotten world.
The first step in deciphering the lettering was to produce an accurate transcription that could be sent back to Europe. Rawlinson defied death to do so, scaling the steep cliff to copy the strange letters. He hired several locals to help him, most notably a Kurdish boy who climbed to the most inaccessible parts of the cliff in order to copy the upper portion of the inscription. In 1847 the project was completed, and a full and accurate copy was sent to Europe.
Rawlinson did not rest on his laurels. As an army officer, he had military and political missions to carry out, but whenever he had a spare moment he puzzled over the secret script. He tried one method after another and finally managed to decipher the Old Persian part of the inscription. This was easiest, since Old Persian was not that different from modern Persian, which Rawlinson knew well. An understanding of the Old Persian section gave him the key he needed to unlock the secrets of the Elamite and Babylonian sections. The great door swung open, and out came a rush of ancient but lively voices – the bustle of Sumerian bazaars, the proclamations of Assyrian kings, the arguments of Babylonian bureaucrats. Without the efforts of modern European imperialists such as Rawlinson, we would not have known much about the fate of the ancient Middle Eastern empires.
Another notable imperialist scholar was William Jones. Jones arrived in India in September 1783 to serve as a judge in the Supreme Court of Bengal. He was so captivated by the wonders of India that within less than six months of his arrival he had founded the Asiatic Society. This academic organisation was devoted to studying the cultures, histories and societies of Asia, and in particular those of India. Within two years Jones published his observations on the Sanskrit language, which pioneered the science of comparative linguistics.
In his publications Jones pointed out surprising similarities between Sanskrit, an ancient Indian language that became the sacred tongue of Hindu ritual, and the Greek and Latin languages, as well as similarities between all these languages and Gothic, Celtic, Old Persian, German, French and English. Thus in Sanskrit, ‘mother’ is ‘matar’, in Latin it is ‘mater’, and in Old Celtic it is ‘mathir’. Jones surmised that all these languages must share a common origin, developing from a now-forgotten ancient ancestor. He was thus the first to identify what later came to be called the Indo-European family of languages.
Jones’ study was an important milestone not merely due to his bold (and accurate) hypotheses, but also because of the orderly methodology that he developed to compare languages. It was adopted by other scholars, enabling them systematically to study the development of all the world’s languages.
Linguistics received enthusiastic imperial support. The European empires believed that in order to govern effectively they must know the languages and cultures of their subjects. British officers arriving in India were supposed to spend up to three years in a Calcutta college, where they studied Hindu and Muslim law alongside English law; Sanskrit, Urdu and Persian alongside Greek and Latin; and Tamil, Bengali and Hindustani culture alongside mathematics, economics and geography. The study of linguistics provided invaluable help in understanding the structure and grammar of local languages.
Thanks to the work of people like William Jones and Henry Rawlinson, the European conquerors knew their empires very well. Far better, indeed, than any previous conquerors, or even than the native population itself. Their superior knowledge had obvious practical advantages. Without such knowledge, it is unlikely that a ridiculously small number of Britons could have succeeded in governing, oppressing and exploiting so many hundreds of millions of Indians for two centuries. Throughout the nineteenth and early twentieth centuries, fewer than 5,000 British officials, about 40,000–70,000 British soldiers, and perhaps another 100,000 British business people, hangers-on, wives and children were sufficient to conquer and rule up to 300 million Indians.9
Yet these practical advantages were not the only reason why empires financed the study of linguistics, botany, geography and history. No less important was the fact that science gave the empires ideological justification. Modern Europeans came to believe that acquiring new knowledge was always good. The fact that the empires produced a constant stream of new knowledge branded them as progressive and positive enterprises. Even today, histories of sciences such as geography, archaeology and botany cannot avoid crediting the European empires, at least indirectly. Histories of botany have little to say about the suffering of the Aboriginal Australians, but they usually find some kind words for James Cook and Joseph Banks.
Furthermore, the new knowledge accumulated by the empires made it possible, at least in theory, to benefit the conquered populations and bring them the benefits of ‘progress’ – to provide them with medicine and education, to build railroads and canals, to ensure justice and prosperity. Imperialists claimed that their empires were not vast enterprises of exploitation but rather altruistic projects conducted for the sake of the non-European races – in Rudyard Kipling’s words, ‘the White Man’s burden’:
Take up the White Man’s burden –
Send forth the best ye breed –
Go bind your sons to exile
To serve your captives’ need;
To wait in heavy harness,
On fluttered folk and wild –
Your new-caught, sullen peoples,
Half-devil and half-child.
Of course, the facts often belied this myth. The British conquered Bengal, the richest province of India, in 1764. The new rulers were interested in little except enriching themselves. They adopted a disastrous economic policy that a few years later led to the outbreak of the Great Bengal Famine. It began in 1769, reached catastrophic levels in 1770, and lasted until 1773. About 10 million Bengalis, a third of the province’s population, died in the calamity.10
In truth, neither the narrative of oppression and exploitation nor that of ‘The White Man’s Burden’ completely matches the facts. The European empires did so many different things on such a large scale that you can find plenty of examples to support whatever you want to say about them. You think that these empires were evil monstrosities that spread death, oppression and injustice around the world? You could easily fill an encyclopedia with their crimes. You want to argue that they in fact improved the conditions of their subjects with new medicines, better economic conditions and greater security? You could fill another encyclopedia with their achievements. Due to their close cooperation with science, these empires wielded so much power and changed the world to such an extent that perhaps they cannot be simply labelled as good or evil. They created the world as we know it, including the ideologies we use in order to judge them.
But science was also used by imperialists to more sinister ends. Biologists, anthropologists and even linguists provided scientific proof that Europeans are superior to all other races, and consequently have the right (if not the duty) to rule over them. After William Jones argued that all Indo-European languages descend from a single ancient language, many scholars were eager to discover who the speakers of that language had been. They noticed that the earliest Sanskrit speakers, who had invaded India from Central Asia more than 3,000 years ago, had called themselves Arya. The speakers of the earliest Persian language called themselves Airiia. European scholars consequently surmised that the people who spoke the primordial language that gave birth to both Sanskrit and Persian (as well as to Greek, Latin, Gothic and Celtic) must have called themselves Aryans. Could it be a coincidence that those who founded the magnificent Indian, Persian, Greek and Roman civilisations were all Aryans?
Next, British, French and German scholars wedded the linguistic theory about the industrious Aryans to Darwin’s theory of natural selection and posited that the Aryans were not just a linguistic group but a biological entity – a race. And not just any race, but a master race of tall, light-haired, blue-eyed, hard-working, and super-rational humans who emerged from the mists of the north to lay the foundations of culture throughout the world. Regrettably, the Aryans who invaded India and Persia intermarried with the local natives they found in these lands, losing their light complexions and blond hair, and with them their rationality and diligence. The civilisations of India and Persia consequently declined. In Europe, on the other hand, the Aryans preserved their racial purity. This is why Europeans had managed to conquer the world, and why they were fit to rule it – provided they took precautions not to mix with inferior races.
Such racist theories, prominent and respectable for many decades, have become anathema among scientists and politicians alike. People continue to conduct a heroic struggle against racism without noticing that the battlefront has shifted, and that the place of racism in imperial ideology has now been replaced by ‘culturism’. There is no such word, but it’s about time we coined it. Among today’s elites, assertions about the contrasting merits of diverse human groups are almost always couched in terms of historical differences between cultures rather than biological differences between races. We no longer say, ‘It’s in their blood.’ We say, ‘It’s in their culture.’
Thus European right-wing parties which oppose Muslim immigration usually take care to avoid racial terminology. Marine Le Pen’s speechwriters would have been shown the door on the spot had they suggested that the leader of France’s Front National party go on television and declare, ‘We don’t want those inferior Semites to dilute our Aryan blood and spoil our Aryan civilisation.’ Instead, the French Front National, the Dutch Party for Freedom, the Alliance for the Future of Austria and their like tend to argue that Western culture, as it has evolved in Europe, is characterised by democratic values, tolerance and gender equality, whereas Muslim culture, which evolved in the Middle East, is characterised by hierarchical politics, fanaticism and misogyny. Since the two cultures are so different, and since many Muslim immigrants are unwilling (and perhaps unable) to adopt Western values, they should not be allowed to enter, lest they foment internal conflicts and corrode European democracy and liberalism.
Such culturist arguments are fed by scientific studies in the humanities and social sciences that highlight the so-called clash of civilisations and the fundamental differences between different cultures. Not all historians and anthropologists accept these theories or support their political usages. But whereas biologists today have an easy time disavowing racism, simply explaining that the biological differences between present-day human populations are trivial, it is harder for historians and anthropologists to disavow culturism. After all, if the differences between human cultures are trivial, why should we pay historians and anthropologists to study them?
Scientists have provided the imperial project with practical knowledge, ideological justification and technological gadgets. Without this contribution it is highly questionable whether Europeans could have conquered the world. The conquerors returned the favour by providing scientists with information and protection, supporting all kinds of strange and fascinating projects and spreading the scientific way of thinking to the far corners of the earth. Without imperial support, it is doubtful whether modern science would have progressed very far. There are very few scientific disciplines that did not begin their lives as servants to imperial growth and that do not owe a large proportion of their discoveries, collections, buildings and scholarships to the generous help of army officers, navy captains and imperial governors.
This is obviously not the whole story. Science was supported by other institutions, not just by empires. And the European empires rose and flourished thanks also to factors other than science. Behind the meteoric rise of both science and empire lurks one particularly important force: capitalism. Were it not for businessmen seeking to make money, Columbus would not have reached America, James Cook would not have reached Australia, and Neil Armstrong would never have taken that small step on the surface of the moon.
MONEY HAS BEEN ESSENTIAL BOTH FOR building empires and for promoting science. But is money the ultimate goal of these undertakings, or perhaps just a dangerous necessity?
It is not easy to grasp the true role of economics in modern history. Whole volumes have been written about how money founded states and ruined them, opened new horizons and enslaved millions, moved the wheels of industry and drove hundreds of species into extinction. Yet to understand modern economic history, you really need to understand just a single word. The word is growth. For better or worse, in sickness and in health, the modern economy has been growing like a hormone-soused teenager. It eats up everything it can find and puts on inches faster than you can count.
For most of history the economy stayed much the same size. Yes, global production increased, but this was due mostly to demographic expansion and the settlement of new lands. Per capita production remained static. But all that changed in the modern age. In 1500, global production of goods and services was equal to about $250 billion; today it hovers around $60 trillion. More importantly, in 1500, annual per capita production averaged $550, while today every man, woman and child produces, on average, $8,800 a year.1 What accounts for this stupendous growth?
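The scale of that jump is easy to check with bare arithmetic. The sketch below (in Python, my own illustration rather than anything from the text) simply divides the figures quoted above to show how much of the growth comes from per-person productivity and how much from sheer population:

```python
# A minimal sketch of the arithmetic behind the figures quoted above (all values in dollars).
production_1500 = 250e9       # global output in 1500
production_today = 60e12      # global output today
per_capita_1500 = 550         # output per person in 1500
per_capita_today = 8_800      # output per person today

print(f"total output grew roughly {production_today / production_1500:.0f}-fold")        # ~240-fold
print(f"per-capita output grew roughly {per_capita_today / per_capita_1500:.0f}-fold")   # ~16-fold

# The rest of the increase is accounted for by population growth:
population_growth = (production_today / per_capita_today) / (production_1500 / per_capita_1500)
print(f"implied population growth: roughly {population_growth:.0f}-fold")                # ~15-fold
```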
Economics is a notoriously complicated subject. To make things easier, let’s imagine a simple example.
Samuel Greedy, a shrewd financier, founds a bank in El Dorado, California.
A. A. Stone, an up-and-coming contractor in El Dorado, finishes his first big job, receiving payment in cash to the tune of $1 million. He deposits this sum in Mr Greedy’s bank. The bank now has $1 million in capital.
In the meantime, Jane McDoughnut, an experienced but impecunious El Dorado chef, thinks she sees a business opportunity – there’s no really good bakery in her part of town. But she doesn’t have enough money of her own to buy a proper facility complete with industrial ovens, sinks, knives and pots. She goes to the bank, presents her business plan to Greedy, and persuades him that it’s a worthwhile investment. He issues her a $1 million loan, by crediting her account in the bank with that sum.
McDoughnut now hires Stone, the contractor, to build and furnish her bakery. His price is $1,000,000.
When she pays him, with a cheque drawn on her account, Stone deposits it in his account in the Greedy bank.
So how much money does Stone have in his bank account? Right, $2 million.
How much money, cash, is actually located in the bank’s safe? Yes, $1 million.
It doesn’t stop there. As contractors are wont to do, two months into the job Stone informs McDoughnut that, due to unforeseen problems and expenses, the bill for constructing the bakery will actually be $2 million. Mrs McDoughnut is not pleased, but she can hardly stop the job in the middle. So she pays another visit to the bank, convinces Mr Greedy to give her an additional loan, and he puts another $1 million in her account. She transfers the money to the contractor’s account.
How much money does Stone have in his account now? He’s got $3 million.
But how much money is actually sitting in the bank? Still just $1 million. In fact, the same $1 million that’s been in the bank all along.
Current US banking law permits the bank to repeat this exercise seven more times. The contractor would eventually have $10 million in his account, even though the bank still has but $1 million in its vaults. Banks are allowed to loan $10 for every dollar they actually possess, which means that 90 per cent of all the money in our bank accounts is not covered by actual coins and notes.2 If all of the account holders at Barclays Bank suddenly demand their money, Barclays will promptly collapse (unless the government steps in to save it). The same is true of Lloyds, Deutsche Bank, Citibank, and all other banks in the world.
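The loop in the bakery story can be replayed as a toy simulation. The sketch below illustrates only the mechanism as described here, not actual banking regulation; the flat 10 per cent reserve ratio is assumed from the ‘$10 for every dollar’ figure, and the names are invented:

```python
# Toy replay of the Greedy-bank story: one pot of cash, repeated loan-and-redeposit rounds.
def simulate_greedy_bank(cash_in_vault=1_000_000, loan_size=1_000_000, reserve_ratio=0.10):
    deposits = cash_in_vault                   # Stone's original $1 million deposit
    ceiling = cash_in_vault / reserve_ratio    # deposits the cash can back at a 10% reserve: $10 million
    rounds = 0
    while deposits + loan_size <= ceiling:
        # The bank credits a new loan, the borrower spends it, and the recipient redeposits it,
        # so deposits grow by the loan amount while the cash in the vault never changes.
        deposits += loan_size
        rounds += 1
    return rounds, deposits, cash_in_vault

rounds, deposits, cash = simulate_greedy_bank()
print(f"loan rounds: {rounds}")                    # 9 – the two in the story plus seven more
print(f"deposits on the books: ${deposits:,}")     # $10,000,000
print(f"cash in the vault: ${cash:,}")             # $1,000,000
```

Run with these numbers, the accounts show $10 million while the vault still holds the original $1 million – the gap the following paragraphs set out to explain.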
It sounds like a giant Ponzi scheme, doesn’t it? But if it’s a fraud, then the entire modern economy is a fraud. The fact is, it’s not a deception, but rather a tribute to the amazing abilities of the human imagination. What enables banks – and the entire economy – to survive and flourish is our trust in the future. This trust is the sole backing for most of the money in the world.
In the bakery example, the discrepancy between the contractor’s account statement and the amount of money actually in the bank is Mrs McDoughnut’s bakery. Mr Greedy has put the bank’s money into the asset, trusting that one day it would be profitable. The bakery hasn’t baked a loaf of bread yet, but McDoughnut and Greedy anticipate that a year hence it will be selling thousands of loaves, rolls, cakes and cookies each day, at a handsome profit. Mrs McDoughnut will then be able to repay her loan, with interest. If at that point Mr Stone decides to withdraw his savings, Greedy will be able to come up with the cash. The entire enterprise is thus founded on trust in an imaginary future – the trust that the entrepreneur and the banker have in the bakery of their dreams, along with the contractor’s trust in the future solvency of the bank.
We’ve already seen that money is an astounding thing because it can represent myriad different objects and convert anything into almost anything else. However, before the modern era this ability was limited. In most cases, money could represent and convert only things that actually existed in the present. This imposed a severe limitation on growth, since it made it very hard to finance new enterprises.
Consider our bakery again. Could McDoughnut get it built if money could represent only tangible objects? No. In the present, she has a lot of dreams, but no tangible resources. The only way she could get her bakery built would be to find a contractor willing to work today and receive payment in a few years’ time, if and when the bakery starts making money. Alas, such contractors are rare breeds. So our entrepreneur is in a bind. Without a bakery, she can’t bake cakes. Without cakes, she can’t make money. Without money, she can’t hire a contractor. Without a contractor, she has no bakery.
Humankind was trapped in this predicament for thousands of years. As a result, economies remained frozen. The way out of the trap was discovered only in the modern era, with the appearance of a new system based on trust in the future. In it, people agreed to represent imaginary goods – goods that do not exist in the present – with a special kind of money they called ‘credit’. Credit enables us to build the present at the expense of the future. It’s founded on the assumption that our future resources are sure to be far more abundant than our present resources. A host of new and wonderful opportunities open up if we can build things in the present using future income.
If credit is such a wonderful thing, why did nobody think of it earlier? Of course they did. Credit arrangements of one kind or another have existed in all known human cultures, going back at least to ancient Sumer. The problem in previous eras was not that no one had the idea or knew how to use it. It was that people seldom wanted to extend much credit because they didn’t trust that the future would be better than the present. They generally believed that times past had been better than their own times and that the future would be worse, or at best much the same. To put that in economic terms, they believed that the total amount of wealth was limited, if not dwindling. People therefore considered it a bad bet to assume that they personally, or their kingdom, or the entire world, would be producing more wealth ten years down the line. Business looked like a zero-sum game. Of course, the profits of one particular bakery might rise, but only at the expense of the bakery next door. Venice might flourish, but only by impoverishing Genoa. The king of England might enrich himself, but only by robbing the king of France. You could cut the pie in many different ways, but it never got any bigger.
That’s why many cultures concluded that making bundles of money was sinful. As Jesus said, ‘It is easier for a camel to pass through the eye of a needle than for a rich man to enter into the kingdom of God’ (Matthew 19:24). If the pie is static, and I have a big part of it, then I must have taken somebody else’s slice. The rich were obliged to do penance for their evil deeds by giving some of their surplus wealth to charity.
The Entrepreneur’s Dilemma
If the global pie stayed the same size, there was no margin for credit. Credit is the difference between today’s pie and tomorrow’s pie. If the pie stays the same, why extend credit? It would be an unacceptable risk unless you believed that the baker or king asking for your money might be able to steal a slice from a competitor. So it was hard to get a loan in the premodern world, and when you got one it was usually small, short-term, and subject to high interest rates. Upstart entrepreneurs thus found it difficult to open new bakeries and great kings who wanted to build palaces or wage wars had no choice but to raise the necessary funds through high taxes and tariffs. That was fine for kings (as long as their subjects remained docile), but a scullery maid who had a great idea for a bakery and wanted to move up in the world generally could only dream of wealth while scrubbing down the royal kitchen’s floors.
The Magic Circle of the Modern Economy
It was lose-lose. Because credit was limited, people had trouble financing new businesses. Because there were few new businesses, the economy did not grow. Because it did not grow, people assumed it never would, and those who had capital were wary of extending credit. The expectation of stagnation fulfilled itself.
Then came the Scientific Revolution and the idea of progress. The idea of progress is built on the notion that if we admit our ignorance and invest resources in research, things can improve. This idea was soon translated into economic terms. Whoever believes in progress believes that geographical discoveries, technological inventions and organisational developments can increase the sum total of human production, trade and wealth. New trade routes in the Atlantic could flourish without ruining old routes in the Indian Ocean. New goods could be produced without reducing the production of old ones. For instance, one could open a new bakery specialising in chocolate cakes and croissants without causing bakeries specialising in bread to go bust. Everybody would simply develop new tastes and eat more. I can be wealthy without your becoming poor; I can be obese without your dying of hunger. The entire global pie can grow.
Over the last 500 years the idea of progress convinced people to put more and more trust in the future. This trust created credit; credit brought real economic growth; and growth strengthened the trust in the future and opened the way for even more credit. It didn’t happen overnight – the economy behaved more like a roller coaster than a balloon. But over the long run, with the bumps evened out, the general direction was unmistakable. Today, there is so much credit in the world that governments, business corporations and private individuals easily obtain large, long-term and low-interest loans that far exceed current income.
The Economic History of the World in a Nutshell
The belief in the growing global pie eventually turned revolutionary. In 1776 the Scottish economist Adam Smith published The Wealth of Nations, probably the most important economics manifesto of all time. In the eighth chapter of its first volume, Smith made the following novel argument: when a landlord, a weaver, or a shoemaker has greater profits than he needs to maintain his own family, he uses the surplus to employ more assistants, in order to further increase his profits. The more profits he has, the more assistants he can employ. It follows that an increase in the profits of private entrepreneurs is the basis for the increase in collective wealth and prosperity.
This may not strike you as very original, because we all live in a capitalist world that takes Smith’s argument for granted. We hear variations on this theme every day in the news. Yet Smith’s claim that the selfish human urge to increase private profits is the basis for collective wealth is one of the most revolutionary ideas in human history – revolutionary not just from an economic perspective, but even more so from a moral and political perspective. What Smith says is, in fact, that greed is good, and that by becoming richer I benefit everybody, not just myself. Egoism is altruism.
Smith taught people to think about the economy as a ‘win-win situation’, in which my profits are also your profits. Not only can we both enjoy a bigger slice of pie at the same time, but the increase in your slice depends upon the increase in my slice. If I am poor, you too will be poor since I cannot buy your products or services. If I am rich, you too will be enriched since you can now sell me something. Smith denied the traditional contradiction between wealth and morality, and threw open the gates of heaven for the rich. Being rich meant being moral. In Smith’s story, people become rich not by despoiling their neighbours, but by increasing the overall size of the pie. And when the pie grows, everyone benefits. The rich are accordingly the most useful and benevolent people in society, because they turn the wheels of growth for everyone’s advantage.
All this depends, however, on the rich using their profits to open new factories and hire new employees, rather than wasting them on non-productive activities. Smith therefore repeated like a mantra the maxim that ‘When profits increase, the landlord or weaver will employ more assistants’ and not ‘When profits increase, Scrooge will hoard his money in a chest and take it out only to count his coins.’ A crucial part of the modern capitalist economy was the emergence of a new ethic, according to which profits ought to be reinvested in production. This brings about more profits, which are again reinvested in production, which brings more profits, et cetera ad infinitum. Investments can be made in many ways: enlarging the factory, conducting scientific research, developing new products. Yet all these investments must somehow increase production and translate into larger profits. In the new capitalist creed, the first and most sacred commandment is: ‘The profits of production must be reinvested in increasing production.’
That’s why capitalism is called ‘capitalism’. Capitalism distinguishes ‘capital’ from mere ‘wealth’. Capital consists of money, goods and resources that are invested in production. Wealth, on the other hand, is buried in the ground or wasted on unproductive activities. A pharaoh who pours resources into a non-productive pyramid is not a capitalist. A pirate who loots a Spanish treasure fleet and buries a chest full of glittering coins on the beach of some Caribbean island is not a capitalist. But a hard-working factory hand who reinvests part of his income in the stock market is.
The idea that ‘The profits of production must be reinvested in increasing production’ sounds trivial. Yet it was alien to most people throughout history. In premodern times, people believed that production was more or less constant. So why reinvest your profits if production won’t increase by much, no matter what you do? Thus medieval noblemen espoused an ethic of generosity and conspicuous consumption. They spent their revenues on tournaments, banquets, palaces and wars, and on charity and monumental cathedrals. Few tried to reinvest profits in increasing their manors’ output, developing better kinds of wheat, or looking for new markets.
In the modern era, the nobility has been overtaken by a new elite whose members are true believers in the capitalist creed. The new capitalist elite is made up not of dukes and marquises, but of board chairmen, stock traders and industrialists. These magnates are far richer than the medieval nobility, but they are far less interested in extravagant consumption, and they spend a much smaller part of their profits on non-productive activities.
Medieval noblemen wore colourful robes of gold and silk, and devoted much of their time to attending banquets, carnivals and glamorous tournaments. In comparison, modern CEOs don dreary uniforms called suits that afford them all the panache of a flock of crows, and they have little time for festivities. The typical venture capitalist rushes from one business meeting to another, trying to figure out where to invest his capital and following the ups and downs of the stocks and bonds he owns. True, his suits might be Versace and he might get to travel in a private jet, but these expenses are nothing compared to what he invests in increasing human production.
It’s not just Versace-clad business moguls who invest to increase productivity. Ordinary folk and government agencies think along similar lines. How many dinner conversations in modest neighbourhoods sooner or later bog down in interminable debate about whether it is better to invest one’s savings in the stock market, bonds or property? Governments too strive to invest their tax revenues in productive enterprises that will increase future income – for example, building a new port could make it easier for factories to export their products, enabling them to make more taxable income, thereby increasing the government’s future revenues. Another government might prefer to invest in education, on the grounds that educated people form the basis for the lucrative high-tech industries, which pay lots of taxes without needing extensive port facilities.
Capitalism began as a theory about how the economy functions. It was both descriptive and prescriptive – it offered an account of how money worked and promoted the idea that reinvesting profits in production leads to fast economic growth. But capitalism gradually became far more than just an economic doctrine. It now encompasses an ethic – a set of teachings about how people should behave, educate their children and even think. Its principal tenet is that economic growth is the supreme good, or at least a proxy for the supreme good, because justice, freedom and even happiness all depend on economic growth. Ask a capitalist how to bring justice and political freedom to a place like Zimbabwe or Afghanistan, and you are likely to get a lecture on how economic affluence and a thriving middle class are essential for stable democratic institutions, and about the need therefore to inculcate Afghan tribesmen in the values of free enterprise, thrift and self-reliance.
This new religion has had a decisive influence on the development of modern science, too. Scientific research is usually funded by either governments or private businesses. When capitalist governments and businesses consider investing in a particular scientific project, the first questions are usually, ‘Will this project enable us to increase production and profits? Will it produce economic growth?’ A project that can’t clear these hurdles has little chance of finding a sponsor. No history of modern science can leave capitalism out of the picture.
Conversely, the history of capitalism is unintelligible without taking science into account. Capitalism’s belief in perpetual economic growth flies in the face of almost everything we know about the universe. A society of wolves would be extremely foolish to believe that the supply of sheep would keep on growing indefinitely. The human economy has nevertheless managed to keep on growing throughout the modern era, thanks only to the fact that scientists come up with another discovery or gadget every few years – such as the continent of America, the internal combustion engine, or genetically engineered sheep. Banks and governments print money, but ultimately, it is the scientists who foot the bill.
Over the last few years, banks and governments have been frenziedly printing money. Everybody is terrified that the current economic crisis may stop the growth of the economy. So they are creating trillions of dollars, euros and yen out of thin air, pumping cheap credit into the system, and hoping that the scientists, technicians and engineers will manage to come up with something really big, before the bubble bursts. Everything depends on the people in the labs. New discoveries in fields such as biotechnology and nanotechnology could create entire new industries, whose profits could back the trillions of make-believe money that the banks and governments have created since 2008. If the labs do not fulfil these expectations before the bubble bursts, we are heading towards very rough times.
Capitalism played a decisive role not only in the rise of modern science, but also in the emergence of European imperialism. And it was European imperialism that created the capitalist credit system in the first place. Of course, credit was not invented in modern Europe. It existed in almost all agricultural societies, and in the early modern period the emergence of European capitalism was closely linked to economic developments in Asia. Remember, too, that until the late eighteenth century, Asia was the world’s economic powerhouse, meaning that Europeans had far less capital at their disposal than the Chinese, Muslims or Indians.
However, in the sociopolitical systems of China, India and the Muslim world, credit played only a secondary role. Merchants and bankers in the markets of Istanbul, Isfahan, Delhi and Beijing may have thought along capitalist lines, but the kings and generals in the palaces and forts tended to despise merchants and mercantile thinking. Most non-European empires of the early modern era were established by great conquerors such as Nurhaci and Nader Shah, or by bureaucratic and military elites as in the Qing and Ottoman empires. Financing wars through taxes and plunder (without making fine distinctions between the two), they owed little to credit systems, and they cared even less about the interests of bankers and investors.
In Europe, on the other hand, kings and generals gradually adopted the mercantile way of thinking, until merchants and bankers became the ruling elite. The European conquest of the world was increasingly financed through credit rather than taxes, and was increasingly directed by capitalists whose main ambition was to receive maximum returns on their investments. The empires built by bankers and merchants in frock coats and top hats defeated the empires built by kings and noblemen in gold clothes and shining armour. The mercantile empires were simply much shrewder in financing their conquests. Nobody wants to pay taxes, but everyone is happy to invest.
In 1484 Christopher Columbus approached the king of Portugal with the proposal that he finance a fleet that would sail westward to find a new trade route to East Asia. Such explorations were a very risky and costly business. A lot of money was needed in order to build ships, buy supplies, and pay sailors and soldiers – and there was no guarantee that the investment would yield a return. The king of Portugal declined.
Like a present-day start-up entrepreneur, Columbus did not give up. He pitched his idea to other potential investors in Italy, France, England, and again in Portugal. Each time he was rejected. He then tried his luck with Ferdinand and Isabella, rulers of newly united Spain. He took on some experienced lobbyists, and with their help he managed to convince Queen Isabella to invest. As every schoolchild knows, Isabella hit the jackpot. Columbus’ discoveries enabled the Spaniards to conquer America, where they established gold and silver mines as well as sugar and tobacco plantations that enriched the Spanish kings, bankers and merchants beyond their wildest dreams.
A hundred years later, princes and bankers were willing to extend far more credit to Columbus’ successors, and they had more capital at their disposal, thanks to the treasures reaped from America. Equally important, princes and bankers had far more trust in the potential of exploration, and were more willing to part with their money. This was the magic circle of imperial capitalism: credit financed new discoveries; discoveries led to colonies; colonies provided profits; profits built trust; and trust translated into more credit. Nurhaci and Nader Shah ran out of fuel after a few thousand miles. Capitalist entrepreneurs only increased their financial momentum from conquest to conquest.
These expeditions nevertheless remained chancy affairs, so credit markets stayed quite cautious. Many expeditions returned to Europe empty-handed, having discovered nothing of value. The English, for instance, wasted a lot of capital in fruitless attempts to discover a north-western passage to Asia through the Arctic. Many other expeditions didn’t return at all. Ships hit icebergs, foundered in tropical storms, or fell victim to pirates. In order to increase the number of potential investors and reduce the risk they incurred, Europeans turned to limited liability joint-stock companies. Instead of a single investor betting all his money on a single rickety ship, the joint-stock company collected money from a large number of investors, each risking only a small portion of his capital. The risks were thereby curtailed, but no cap was placed on the profits. Even a small investment in the right ship could turn you into a millionaire.
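The risk-spreading point can be shown with a small simulation; the probabilities and payoffs below are invented for the sake of the example and are not drawn from the text:

```python
import random

# Compare backing one ship outright with splitting the same stake across many ships.
random.seed(0)
P_SUCCESS, PAYOFF = 0.6, 3.0        # a successful voyage triples the stake; failure loses it all
STAKE, N_SHIPS, TRIALS = 1_000, 20, 10_000

def one_ship(stake=STAKE):
    return stake * PAYOFF if random.random() < P_SUCCESS else 0.0

def joint_stock_company(n_ships=N_SHIPS):
    # the same stake divided into equal shares, each riding a different voyage
    return sum(one_ship(STAKE / n_ships) for _ in range(n_ships))

solo = [one_ship() for _ in range(TRIALS)]
pooled = [joint_stock_company() for _ in range(TRIALS)]

def share_losing_everything(outcomes):
    return sum(o == 0 for o in outcomes) / len(outcomes)

print(f"average return, solo: {sum(solo) / TRIALS:,.0f}")      # much the same expectation...
print(f"average return, pooled: {sum(pooled) / TRIALS:,.0f}")
print(f"chance of total loss, solo: {share_losing_everything(solo):.0%}")     # roughly 40%
print(f"chance of total loss, pooled: {share_losing_everything(pooled):.2%}") # practically nil
```

The expected return is the same either way; what changes is the chance of being wiped out by a single unlucky voyage, which is what made cautious investors willing to hand over their money.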
Decade by decade, western Europe witnessed the development of a sophisticated financial system that could raise large amounts of credit on short notice and put it at the disposal of private entrepreneurs and governments. This system could finance explorations and conquests far more efficiently than any kingdom or empire. The new-found power of credit can be seen in the bitter struggle between Spain and the Netherlands. In the sixteenth century, Spain was the most powerful state in Europe, holding sway over a vast global empire. It ruled much of Europe, huge chunks of North and South America, the Philippine Islands, and a string of bases along the coasts of Africa and Asia. Every year, fleets heavy with American and Asian treasures returned to the ports of Seville and Cadiz. The Netherlands was a small and windy swamp, devoid of natural resources, a small corner of the king of Spain’s dominions.
In 1568 the Dutch, who were mainly Protestant, revolted against their Catholic Spanish overlord. At first the rebels seemed to play the role of Don Quixote, courageously tilting at invincible windmills. Yet within eighty years the Dutch had not only secured their independence from Spain, but had managed to replace the Spaniards and their Portuguese allies as masters of the ocean highways, build a global Dutch empire, and become the richest state in Europe.
The secret of Dutch success was credit. The Dutch burghers, who had little taste for combat on land, hired mercenary armies to fight the Spanish for them. The Dutch themselves meanwhile took to the sea in ever-larger fleets. Mercenary armies and cannon-brandishing fleets cost a fortune, but the Dutch were able to finance their military expeditions more easily than the mighty Spanish Empire because they secured the trust of the burgeoning European financial system at a time when the Spanish king was carelessly eroding its trust in him. Financiers extended the Dutch enough credit to set up armies and fleets, and these armies and fleets gave the Dutch control of world trade routes, which in turn yielded handsome profits. The profits allowed the Dutch to repay the loans, which strengthened the trust of the financiers. Amsterdam was fast becoming not only one of the most important ports of Europe, but also the continent’s financial Mecca.
How exactly did the Dutch win the trust of the financial system? Firstly, they were sticklers about repaying their loans on time and in full, making the extension of credit less risky for lenders. Secondly, their country’s judicial system enjoyed independence and protected private rights – in particular private property rights. Capital trickles away from dictatorial states that fail to defend private individuals and their property. Instead, it flows into states upholding the rule of law and private property.
Imagine that you are the son of a solid family of German financiers. Your father sees an opportunity to expand the business by opening branches in major European cities. He sends you to Amsterdam and your younger brother to Madrid, giving you each 10,000 gold coins to invest. Your brother lends his start-up capital at interest to the king of Spain, who needs it to raise an army to fight the king of France. You decide to lend yours to a Dutch merchant, who wants to invest in scrubland on the southern end of a desolate island called Manhattan, certain that property values there will skyrocket as the Hudson River turns into a major trade artery. Both loans are to be repaid within a year.
The year passes. The Dutch merchant sells the land he’s bought at a handsome markup and repays your money with the interest he promised. Your father is pleased. But your little brother in Madrid is getting nervous. The war with France ended well for the king of Spain, but he has now embroiled himself in a conflict with the Turks. He needs every penny to finance the new war, and thinks this is far more important than repaying old debts. Your brother sends letters to the palace and asks friends with connections at court to intercede, but to no avail. Not only has your brother not earned the promised interest – he’s lost the principal. Your father is not pleased.
Now, to make matters worse, the king sends a treasury official to your brother to tell him, in no uncertain terms, that he expects to receive another loan of the same size, forthwith. Your brother has no money to lend. He writes home to Dad, trying to persuade him that this time the king will come through. The paterfamilias has a soft spot for his youngest, and agrees with a heavy heart. Another 10,000 gold coins disappear into the Spanish treasury, never to be seen again. Meanwhile in Amsterdam, things are looking bright. You make more and more loans to enterprising Dutch merchants, who repay them promptly and in full. But your luck does not hold indefinitely. One of your usual clients has a hunch that wooden clogs are going to be the next fashion craze in Paris, and asks you for a loan to set up a footwear emporium in the French capital. You lend him the money, but unfortunately the clogs don’t catch on with the French ladies, and the disgruntled merchant refuses to repay the loan.
Your father is furious, and tells both of you it is time to unleash the lawyers. Your brother files suit in Madrid against the Spanish monarch, while you file suit in Amsterdam against the erstwhile wooden-shoe wizard. In Spain, the law courts are subservient to the king – the judges serve at his pleasure and fear punishment if they do not do his will. In the Netherlands, the courts are a separate branch of government, not dependent on the country’s burghers and princes. The court in Madrid throws out your brother’s suit, while the court in Amsterdam finds in your favour and puts a lien on the clog-merchant’s assets to force him to pay up. Your father has learned his lesson. Better to do business with merchants than with kings, and better to do it in Holland than in Madrid.
And your brother’s travails are not over. The king of Spain desperately needs more money to pay his army. He’s sure that your father has cash to spare. So he brings trumped-up treason charges against your brother. If he doesn’t come up with 20,000 gold coins forthwith, he’ll get cast into a dungeon and rot there until he dies.
Your father has had enough. He pays the ransom for his beloved son, but swears never to do business in Spain again. He closes his Madrid branch and relocates your brother to Rotterdam. Two branches in Holland now look like a really good idea. He hears that even Spanish capitalists are smuggling their fortunes out of their country. They, too, realise that if they want to keep their money and use it to gain more wealth, they are better off investing it where the rule of law prevails and where private property is respected – in the Netherlands, for example.
In such ways did the king of Spain squander the trust of investors at the same time that Dutch merchants gained their confidence. And it was the Dutch merchants – not the Dutch state – who built the Dutch Empire. The king of Spain kept on trying to finance and maintain his conquests by raising unpopular taxes from a disgruntled populace. The Dutch merchants financed conquest by getting loans, and increasingly also by selling shares in their companies that entitled their holders to receive a portion of the company’s profits. Cautious investors who would never have given their money to the king of Spain, and who would have thought twice before extending credit to the Dutch government, happily invested fortunes in the Dutch joint-stock companies that were the mainstay of the new empire.
If you thought a company was going to make a big profit but it had already sold all its shares, you could buy some from people who owned them, probably for a higher price than they originally paid. If you bought shares and later discovered that the company was in dire straits, you could try to unload your stock for a lower price. The resulting trade in company shares led to the establishment in most major European cities of stock exchanges, places where the shares of companies were traded.
The most famous Dutch joint-stock company, the Vereenigde Oostindische Compagnie, or VOC for short, was chartered in 1602, just as the Dutch were throwing off Spanish rule and the boom of Spanish artillery could still be heard not far from Amsterdam’s ramparts. VOC used the money it raised from selling shares to build ships, send them to Asia, and bring back Chinese, Indian and Indonesian goods. It also financed military actions taken by company ships against competitors and pirates. Eventually VOC money financed the conquest of Indonesia.
Indonesia is the world’s biggest archipelago. Its thousands upon thousands of islands were ruled in the early seventeenth century by hundreds of kingdoms, principalities, sultanates and tribes. When VOC merchants first arrived in Indonesia in 1603, their aims were strictly commercial. However, in order to secure their commercial interests and maximise the profits of the shareholders, VOC merchants began to fight against local potentates who charged inflated tariffs, as well as against European competitors. VOC armed its merchant ships with cannons; it recruited European, Japanese, Indian and Indonesian mercenaries; and it built forts and conducted full-scale battles and sieges. This enterprise may sound a little strange to us, but in the early modern age it was common for private companies to hire not only soldiers, but also generals and admirals, cannons and ships, and even entire off-the-shelf armies. The international community took this for granted and didn’t raise an eyebrow when a private company established an empire.
Island after island fell to VOC mercenaries and a large part of Indonesia became a VOC colony. VOC ruled Indonesia for close to 200 years. Only in 1800 did the Dutch state assume control of Indonesia, making it a Dutch national colony for the following 150 years. Today some people warn that twenty-first-century corporations are accumulating too much power. Early modern history shows just how far that can go if businesses are allowed to pursue their self-interest unchecked.
While VOC operated in the Indian Ocean, the Dutch West Indies Company, or WIC, plied the Atlantic. In order to control trade on the important Hudson River, WIC built a settlement called New Amsterdam on an island at the river’s mouth. The colony was threatened by Indians and repeatedly attacked by the British, who eventually captured it in 1664. The British changed its name to New York. The remains of the wall built by WIC to defend its colony against Indians and British are today paved over by the world’s most famous street – Wall Street.
As the seventeenth century wound to an end, complacency and costly continental wars caused the Dutch to lose not only New York, but also their place as Europe’s financial and imperial engine. The vacancy was hotly contested by France and Britain. At first France seemed to be in a far stronger position. It was bigger than Britain, richer, more populous, and it possessed a larger and more experienced army. Yet Britain managed to win the trust of the financial system whereas France proved itself unworthy. The behaviour of the French crown was particularly notorious during what was called the Mississippi Bubble, the largest financial crisis of eighteenth-century Europe. That story also begins with an empire-building joint-stock company.
In 1717 the Mississippi Company, chartered in France, set out to colonise the lower Mississippi valley, establishing the city of New Orleans in the process. To finance its ambitious plans, the company, which had good connections at the court of King Louis XV, sold shares on the Paris stock exchange. John Law, the company’s director, was also the governor of the central bank of France. Furthermore, the king had appointed him controller-general of finances, an office roughly equivalent to that of a modern finance minister. In 1717 the lower Mississippi valley offered few attractions besides swamps and alligators, yet the Mississippi Company spread tales of fabulous riches and boundless opportunities. French aristocrats, businessmen and the stolid members of the urban bourgeoisie fell for these fantasies, and Mississippi share prices skyrocketed. Initially, shares were offered at 500 livres apiece. On 1 August 1719, shares traded at 2,750 livres. By 30 August, they were worth 4,100 livres, and on 4 September, they reached 5,000 livres. On 2 December the price of a Mississippi share crossed the threshold of 10,000 livres. Euphoria swept the streets of Paris. People sold all their possessions and took huge loans in order to buy Mississippi shares. Everybody believed they’d discovered the easy way to riches.
39. New Amsterdam in 1660, at the tip of Manhattan Island. The settlement’s protective wall is today paved over by Wall Street.
{Redraft of the Castello Plan, John Wolcott Adams, 1916 © Collection of the New-York Historical Society/The Bridgeman Art Library.}
A few days later, the panic began. Some speculators realised that the share prices were totally unrealistic and unsustainable. They figured that they had better sell while stock prices were at their peak. As the supply of shares available rose, their price declined. When other investors saw the price going down, they also wanted to get out quick. The stock price plummeted further, setting off an avalanche. In order to stabilise prices, the central bank of France – at the direction of its governor, John Law – bought up Mississippi shares, but it could not do so for ever. Eventually it ran out of money. When this happened, the controller-general of finances, the same John Law, authorised the printing of more money in order to buy additional shares. This placed the entire French financial system inside the bubble. And not even this financial wizardry could save the day. The price of Mississippi shares dropped from 10,000 livres back to 1,000 livres, and then collapsed completely, and the shares lost every sou of their worth. By now, the central bank and the royal treasury owned a huge amount of worthless stock and had no money. The big speculators emerged largely unscathed – they had sold in time. Small investors lost everything, and many committed suicide.
The Mississippi Bubble was one of history’s most spectacular financial crashes. The royal French financial system never recuperated fully from the blow. The way in which the Mississippi Company used its political clout to manipulate share prices and fuel the buying frenzy caused the public to lose faith in the French banking system and in the financial wisdom of the French king. Louis XV found it more and more difficult to raise credit. This became one of the chief reasons that the overseas French Empire fell into British hands. While the British could borrow money easily and at low interest rates, France had difficulties securing loans, and had to pay high interest on them. In order to finance his growing debts, the king of France borrowed more and more money at higher and higher interest rates. Eventually, in the 1780s, Louis XVI, who had ascended to the throne on his grandfather’s death, realised that half his annual budget was tied to servicing the interest on his loans, and that he was heading towards bankruptcy. Reluctantly, in 1789, Louis XVI convened the Estates General, the French parliament that had not met for a century and a half, in order to find a solution to the crisis. Thus began the French Revolution.
While the French overseas empire was crumbling, the British Empire was expanding rapidly. Like the Dutch Empire before it, the British Empire was established and run largely by private joint-stock companies based in the London stock exchange. The first English settlements in North America were established in the early seventeenth century by joint-stock companies such as the London Company, the Plymouth Company, the Dorchester Company and the Massachusetts Company.
The Indian subcontinent too was conquered not by the British state, but by the mercenary army of the British East India Company. This company outperformed even the VOC. From its headquarters in Leadenhall Street, London, it ruled a mighty Indian empire for about a century, maintaining a huge military force of up to 350,000 soldiers, considerably outnumbering the armed forces of the British monarchy. Only in 1858 did the British crown nationalise India along with the company’s private army. Napoleon made fun of the British, calling them a nation of shopkeepers. Yet these shopkeepers defeated Napoleon himself, and their empire was the largest the world has ever seen.
The nationalisation of Indonesia by the Dutch crown (1800) and of India by the British crown (1858) hardly ended the embrace of capitalism and empire. On the contrary, the connection only grew stronger during the nineteenth century. Joint-stock companies no longer needed to establish and govern private colonies – their managers and large shareholders now pulled the strings of power in London, Amsterdam and Paris, and they could count on the state to look after their interests. As Marx and other social critics quipped, Western governments were becoming a capitalist trade union.
The most notorious example of how governments did the bidding of big money was the First Opium War, fought between Britain and China (1840–42). In the first half of the nineteenth century, the British East India Company and sundry British business people made fortunes by exporting drugs, particularly opium, to China. Millions of Chinese became addicts, debilitating the country both economically and socially. In the late 1830s the Chinese government issued a ban on drug trafficking, but British drug merchants simply ignored the law. Chinese authorities began to confiscate and destroy drug cargos. The drug cartels had close connections in Westminster and Downing Street – many MPs and Cabinet ministers in fact held stock in the drug companies – so they pressured the government to take action.
In 1840 Britain duly declared war on China in the name of ‘free trade’. It was a walkover. The overconfident Chinese were no match for Britain’s new wonder weapons – steamboats, heavy artillery, rockets and rapid-fire rifles. Under the subsequent peace treaty, China agreed not to constrain the activities of British drug merchants and to compensate them for damages inflicted by the Chinese police. Furthermore, the British demanded and received control of Hong Kong, which they proceeded to use as a secure base for drug trafficking (Hong Kong remained in British hands until 1997). In the late nineteenth century, about 40 million Chinese, a tenth of the country’s population, were opium addicts.3
Egypt, too, learned to respect the long arm of British capitalism. During the nineteenth century, French and British investors lent huge sums to the rulers of Egypt, first in order to finance the Suez Canal project, and later to fund far less successful enterprises. Egyptian debt swelled, and European creditors increasingly meddled in Egyptian affairs. In 1881 Egyptian nationalists had had enough and rebelled. They declared a unilateral abrogation of all foreign debt. Queen Victoria was not amused. A year later she dispatched her army and navy to the Nile and Egypt remained a British protectorate until after World War Two.
These were hardly the only wars fought in the interests of investors. In fact, war itself could become a commodity, just like opium. In 1821 the Greeks rebelled against the Ottoman Empire. The uprising aroused great sympathy in liberal and romantic circles in Britain – Lord Byron, the poet, even went to Greece to fight alongside the insurgents. But London financiers saw an opportunity as well. They proposed to the rebel leaders the issue of tradable Greek Rebellion Bonds on the London stock exchange. The Greeks would promise to repay the bonds, plus interest, if and when they won their independence. Private investors bought bonds to make a profit, or out of sympathy for the Greek cause, or both. The value of Greek Rebellion Bonds rose and fell on the London stock exchange in tempo with military successes and failures on the battlefields of Hellas. The Turks gradually gained the upper hand. With a rebel defeat imminent, the bondholders faced the prospect of losing their trousers. The bondholders’ interest was the national interest, so the British organised an international fleet that, in 1827, sank the main Ottoman flotilla in the Battle of Navarino. After centuries of subjugation, Greece was finally free. But freedom came with a huge debt that the new country had no way of repaying. The Greek economy was mortgaged to British creditors for decades to come.
40. The Battle of Navarino (1827).
{© National Maritime Museum, Greenwich, London.}
The bear hug between capital and politics has had far-reaching implications for the credit market. The amount of credit in an economy is determined not only by purely economic factors such as the discovery of a new oil field or the invention of a new machine, but also by political events such as regime changes or more ambitious foreign policies. After the Battle of Navarino, British capitalists were more willing to invest their money in risky overseas deals. They had seen that if a foreign debtor refused to repay loans, Her Majesty’s army would get their money back.
This is why today a country’s credit rating is far more important to its economic well-being than are its natural resources. Credit ratings indicate the probability that a country will pay its debts. In addition to purely economic data, they take into account political, social and even cultural factors. An oil-rich country cursed with a despotic government, endemic warfare and a corrupt judicial system will usually receive a low credit rating. As a result, it is likely to remain relatively poor since it will not be able to raise the necessary capital to make the most of its oil bounty. A country devoid of natural resources, but which enjoys peace, a fair judicial system and a free government is likely to receive a high credit rating. As such, it may be able to raise enough cheap capital to support a good education system and foster a flourishing high-tech industry.
Capital and politics influence each other to such an extent that their relations are hotly debated by economists, politicians and the general public alike. Ardent capitalists tend to argue that capital should be free to influence politics, but politics should not be allowed to influence capital. They argue that when governments interfere in the markets, political interests cause them to make unwise investments that result in slower growth. For example, a government may impose heavy taxation on industrialists and use the money to give lavish unemployment benefits, which are popular with voters. In the view of many business people, it would be far better if the government left the money with them. They would use it, they claim, to open new factories and hire the unemployed.
In this view, the wisest economic policy is to keep politics out of the economy, reduce taxation and government regulation to a minimum, and allow market forces free rein to take their course. Private investors, unencumbered by political considerations, will invest their money where they can get the most profit, so the way to ensure the most economic growth – which will benefit everyone, industrialists and workers – is for the government to do as little as possible. This free-market doctrine is today the most common and influential variant of the capitalist creed. The most enthusiastic advocates of the free market criticise military adventures abroad with as much zeal as welfare programmes at home. They offer governments the same advice that Zen masters offer initiates: just do nothing.
But in its extreme form, belief in the free market is as naïve as belief in Santa Claus. There simply is no such thing as a market free of all political bias. The most important economic resource is trust in the future, and this resource is constantly threatened by thieves and charlatans. Markets by themselves offer no protection against fraud, theft and violence. It is the job of political systems to ensure trust by legislating sanctions against cheats and to establish and support police forces, courts and jails which will enforce the law. When kings fail to do their jobs and regulate the markets properly, it leads to loss of trust, dwindling credit and economic depression. That was the lesson taught by the Mississippi Bubble of 1719, and anyone who forgot it was reminded by the US housing bubble of 2007, and the ensuing credit crunch and recession.
There is an even more fundamental reason why it’s dangerous to give markets a completely free rein. Adam Smith taught that the shoemaker would use his surplus to employ more assistants. This implies that egoistic greed is beneficial for all, since profits are utilised to expand production and hire more employees.
Yet what happens if the greedy shoemaker increases his profits by paying employees less and increasing their work hours? The standard answer is that the free market would protect the employees. If our shoemaker pays too little and demands too much, the best employees would naturally abandon him and go to work for his competitors. The tyrant shoemaker would find himself left with the worst labourers, or with no labourers at all. He would have to mend his ways or go out of business. His own greed would compel him to treat his employees well.
This sounds bulletproof in theory, but in practice the bullets get through all too easily. In a completely free market, unsupervised by kings and priests, avaricious capitalists can establish monopolies or collude against their workforces. If there is a single corporation controlling all shoe factories in a country, or if all factory owners conspire to reduce wages simultaneously, then the labourers are no longer able to protect themselves by switching jobs.
Even worse, greedy bosses might curtail the workers’ freedom of movement through debt peonage or slavery. At the end of the Middle Ages, slavery was almost unknown in Christian Europe. During the early modern period, the rise of European capitalism went hand in hand with the rise of the Atlantic slave trade. Unrestrained market forces, rather than tyrannical kings or racist ideologues, were responsible for this calamity.
When the Europeans conquered America, they opened gold and silver mines and established sugar, tobacco and cotton plantations. These mines and plantations became the mainstay of American production and export. The sugar plantations were particularly important. In the Middle Ages, sugar was a rare luxury in Europe. It was imported from the Middle East at prohibitive prices and used sparingly as a secret ingredient in delicacies and snake-oil medicines. After large sugar plantations were established in America, ever-increasing amounts of sugar began to reach Europe. The price of sugar dropped and Europe developed an insatiable sweet tooth. Entrepreneurs met this need by producing huge quantities of sweets: cakes, cookies, chocolate, candy, and sweetened beverages such as cocoa, coffee and tea. The annual sugar intake of the average Englishman rose from near zero in the early seventeenth century to around eighteen pounds in the early nineteenth century.
However, growing cane and extracting its sugar was a labour-intensive business. Few people wanted to work long hours in malaria-infested sugar fields under a tropical sun. Contract labourers would have produced a commodity too expensive for mass consumption. Sensitive to market forces, and greedy for profits and economic growth, European plantation owners switched to slaves.
From the sixteenth to the nineteenth centuries, about 10 million African slaves were imported to America. About 70 per cent of them worked on the sugar plantations. Labour conditions were abominable. Most slaves lived a short and miserable life, and millions more died during wars waged to capture slaves or during the long voyage from inner Africa to the shores of America. All this so that Europeans could enjoy their sweet tea and candy – and sugar barons could enjoy huge profits.
The slave trade was not controlled by any state or government. It was a purely economic enterprise, organised and financed by the free market according to the laws of supply and demand. Private slave-trading companies sold shares on the Amsterdam, London and Paris stock exchanges. Middle-class Europeans looking for a good investment bought these shares. Relying on this money, the companies bought ships, hired sailors and soldiers, purchased slaves in Africa, and transported them to America. There they sold the slaves to the plantation owners, using the proceeds to purchase plantation products such as sugar, cocoa, coffee, tobacco, cotton and rum. They returned to Europe, sold the sugar and cotton for a good price, and then sailed to Africa to begin another round. The shareholders were very pleased with this arrangement. Throughout the eighteenth century the yield on slave-trade investments was about 6 per cent a year – they were extremely profitable, as any modern consultant would be quick to admit.
This is the fly in the ointment of free-market capitalism. It cannot ensure that profits are gained in a fair way, or distributed in a fair manner. On the contrary, the craving to increase profits and production blinds people to anything that might stand in the way. When growth becomes a supreme good, unrestricted by any other ethical considerations, it can easily lead to catastrophe. Some religions, such as Christianity and Nazism, have killed millions out of burning hatred. Capitalism has killed millions out of cold indifference coupled with greed. The Atlantic slave trade did not stem from racist hatred towards Africans. The individuals who bought the shares, the brokers who sold them, and the managers of the slave-trade companies rarely thought about the Africans. Nor did the owners of the sugar plantations. Many owners lived far from their plantations, and the only information they demanded were neat ledgers of profits and losses.
It is important to remember that the Atlantic slave trade was not a single aberration in an otherwise spotless record. The Great Bengal Famine, discussed in the previous chapter, was caused by a similar dynamic – the British East India Company cared more about its profits than about the lives of 10 million Bengalis. VOC’s military campaigns in Indonesia were financed by upstanding Dutch burghers who loved their children, gave to charity, and enjoyed good music and fine art, but had no regard for the suffering of the inhabitants of Java, Sumatra and Malacca. Countless other crimes and misdemeanours accompanied the growth of the modern economy in other parts of the planet.
The nineteenth century brought no improvement in the ethics of capitalism. The Industrial Revolution that swept through Europe enriched the bankers and capital-owners, but condemned millions of workers to a life of abject poverty. In the European colonies things were even worse. In 1876, King Leopold II of Belgium set up a non-governmental humanitarian organisation with the declared aim of exploring Central Africa and fighting the slave trade along the Congo River. It was also charged with improving conditions for the inhabitants of the region by building roads, schools and hospitals. In 1885 the European powers agreed to give this organisation control of 1.4 million square miles in the Congo basin. This territory, seventy-five times the size of Belgium, was henceforth known as the Congo Free State. Nobody asked the opinion of the territory’s 20–30 million inhabitants.
Within a short time the humanitarian organisation became a business enterprise whose real aim was growth and profit. The schools and hospitals were forgotten, and the Congo basin was instead filled with mines and plantations, run by mostly Belgian officials who ruthlessly exploited the local population. The rubber industry was particularly notorious. Rubber was fast becoming an industrial staple, and rubber export was the Congo’s most important source of income. The African villagers who collected the rubber were required to provide higher and higher quotas. Those who failed to deliver their quota were punished brutally for their ‘laziness’. Their arms were chopped off and occasionally entire villages were massacred. According to the most moderate estimates, between 1885 and 1908 the pursuit of growth and profits cost the lives of 6 million individuals (at least 20 per cent of the Congo’s population). Some estimates reach up to 10 million deaths.4
After 1908, and especially after 1945, capitalist greed was somewhat reined in, not least due to the fear of Communism. Yet inequities are still rampant. The economic pie of 2014 is far larger than the pie of 1500, but it is distributed so unevenly that many African peasants and Indonesian labourers return home after a hard day’s work with less food than did their ancestors 500 years ago. Much like the Agricultural Revolution, so too the growth of the modern economy might turn out to be a colossal fraud. The human species and the global economy may well keep growing, but many more individuals may live in hunger and want.
Capitalism has two answers to this criticism. First, capitalism has created a world that nobody but a capitalist is capable of running. The only serious attempt to manage the world differently – Communism – was so much worse in almost every conceivable way that nobody has the stomach to try again. In 8500 BC one could cry bitter tears over the Agricultural Revolution, but it was too late to give up agriculture. Similarly, we may not like capitalism, but we cannot live without it.
The second answer is that we just need more patience – paradise, the capitalists promise, is right around the corner. True, mistakes have been made, such as the Atlantic slave trade and the exploitation of the European working class. But we have learned our lesson, and if we just wait a little longer and allow the pie to grow a little bigger, everybody will receive a fatter slice. The division of spoils will never be equitable, but there will be enough to satisfy every man, woman and child – even in the Congo.
There are, indeed, some positive signs. At least when we use purely material criteria – such as life expectancy, child mortality and calorie intake – the standard of living of the average human in 2014 is significantly higher than it was in 1914, despite the exponential growth in the number of humans.
Yet can the economic pie grow indefinitely? Every pie requires raw materials and energy. Prophets of doom warn that sooner or later Homo sapiens will exhaust the raw materials and energy of planet Earth. And what will happen then?
THE MODERN ECONOMY GROWS THANKS to our trust in the future and to the willingness of capitalists to reinvest their profits in production. Yet that does not suffice. Economic growth also requires energy and raw materials, and these are finite. When and if they run out, the entire system will collapse.
But the evidence provided by the past is that they are finite only in theory. Counter-intuitively, while humankind’s use of energy and raw materials has mushroomed in the last few centuries, the amounts available for our exploitation have actually increased. Whenever a shortage of either has threatened to slow economic growth, investments have flowed into scientific and technological research. These have invariably produced not only more efficient ways of exploiting existing resources, but also completely new types of energy and materials.
Consider the vehicle industry. Over the last 300 years, humankind has manufactured billions of vehicles – from carts and wheelbarrows, to trains, cars, supersonic jets and space shuttles. One might have expected that such a prodigious effort would have exhausted the energy sources and raw materials available for vehicle production, and that today we would be scraping the bottom of the barrel. Yet the opposite is the case. Whereas in 1700 the global vehicle industry relied overwhelmingly on wood and iron, today it has at its disposal a cornucopia of new-found materials such as plastic, rubber, aluminium and titanium, none of which our ancestors even knew about. Whereas in 1700 carts were built mainly by the muscle power of carpenters and smiths, today the machines in Toyota and Boeing factories are powered by petroleum combustion engines and nuclear power stations. A similar revolution has swept almost all other fields of industry. We call it the Industrial Revolution.
For millennia prior to the Industrial Revolution, humans already knew how to make use of a large variety of energy sources. They burned wood in order to smelt iron, heat houses and bake cakes. Sailing ships harnessed wind power to move around, and watermills captured the flow of rivers to grind grain. Yet all these had clear limits and problems. Trees were not available everywhere, the wind didn’t always blow when you needed it, and water power was only useful if you lived near a river.
An even bigger problem was that people didn’t know how to convert one type of energy into another. They could harness the movement of wind and water to sail ships and push millstones, but not to heat water or smelt iron. Conversely, they could not use the heat energy produced by burning wood to make a millstone move. Humans had only one machine capable of performing such energy conversion tricks: the body. In the natural process of metabolism, the bodies of humans and other animals burn organic fuels known as food and convert the released energy into the movement of muscles. Men, women and beasts could consume grain and meat, burn up their carbohydrates and fats, and use the energy to haul a rickshaw or pull a plough.
Since human and animal bodies were the only energy conversion device available, muscle power was the key to almost all human activities. Human muscles built carts and houses, ox muscles ploughed fields, and horse muscles transported goods. The energy that fuelled these organic muscle-machines came ultimately from a single source – plants. Plants in their turn obtained their energy from the sun. By the process of photosynthesis, they captured solar energy and packed it into organic compounds. Almost everything people did throughout history was fuelled by solar energy that was captured by plants and converted into muscle power.
Human history was consequently dominated by two main cycles: the growth cycles of plants and the changing cycles of solar energy (day and night, summer and winter). When sunlight was scarce and when wheat fields were still green, humans had little energy. Granaries were empty, tax collectors were idle, soldiers found it difficult to move and fight, and kings tended to keep the peace. When the sun shone brightly and the wheat ripened, peasants harvested the crops and filled the granaries. Tax collectors hurried to take their share. Soldiers flexed their muscles and sharpened their swords. Kings convened councils and planned their next campaigns. Everyone was fuelled by solar energy – captured and packaged in wheat, rice and potatoes.
Throughout these long millennia, day in and day out, people stood face to face with the most important invention in the history of energy production – and failed to notice it. It stared them in the eye every time a housewife or servant put up a kettle to boil water for tea or put a pot full of potatoes on the stove. The minute the water boiled, the lid of the kettle or the pot jumped. Heat was being converted to movement. But jumping pot lids were an annoyance, especially if you forgot the pot on the stove and the water boiled over. Nobody saw their real potential.
A partial breakthrough in converting heat into movement followed the invention of gunpowder in ninth-century China. At first, the idea of using gunpowder to propel projectiles was so counter-intuitive that for centuries gunpowder was used primarily to produce fire bombs. But eventually – perhaps after some bomb expert ground gunpowder in a mortar only to have the pestle shoot out with force – guns made their appearance. About 600 years passed between the invention of gunpowder and the development of effective artillery.
Even then, the idea of converting heat into motion remained so counter-intuitive that another three centuries went by before people invented the next machine that used heat to move things around. The new technology was born in British coal mines. As the British population swelled, forests were cut down to fuel the growing economy and make way for houses and fields. Britain suffered from an increasing shortage of firewood. It began burning coal as a substitute. Many coal seams were located in waterlogged areas, and flooding prevented miners from accessing the lower strata of the mines. It was a problem looking for a solution. Around 1700, a strange noise began reverberating around British mineshafts. That noise – harbinger of the Industrial Revolution – was subtle at first, but it grew louder and louder with each passing decade until it enveloped the entire world in a deafening cacophony. It emanated from a steam engine.
There are many types of steam engines, but they all share one common principle. You burn some kind of fuel, such as coal, and use the resulting heat to boil water, producing steam. As the steam expands it pushes a piston. The piston moves, and anything that is connected to the piston moves with it. You have converted heat into movement! In eighteenth-century British coal mines, the piston was connected to a pump that extracted water from the bottom of the mineshafts. The earliest engines were incredibly inefficient. You needed to burn a huge load of coal in order to pump out even a tiny amount of water. But in the mines coal was plentiful and close at hand, so nobody cared.
In the decades that followed, British entrepreneurs improved the efficiency of the steam engine, brought it out of the mineshafts, and connected it to looms and gins. This revolutionised textile production, making it possible to produce ever-larger quantities of cheap textiles. In the blink of an eye, Britain became the workshop of the world. But even more importantly, getting the steam engine out of the mines broke an important psychological barrier. If you could burn coal in order to move textile looms, why not use the same method to move other things, such as vehicles?
In 1825, a British engineer connected a steam engine to a train of mine wagons full of coal. The engine drew the wagons along an iron rail some thirteen miles long from the mine to the nearest harbour. This was the first steam-powered locomotive in history. Clearly, if steam could be used to transport coal, why not other goods? And why not even people? On 15 September 1830, the first commercial railway line was opened, connecting Liverpool with Manchester. The trains moved under the same steam power that had previously pumped water and moved textile looms. A mere twenty years later, Britain had tens of thousands of miles of railway tracks.1
Henceforth, people became obsessed with the idea that machines and engines could be used to convert one type of energy into another. Any type of energy, anywhere in the world, might be harnessed to whatever need we had, if we could just invent the right machine. For example, when physicists realised that an immense amount of energy is stored within atoms, they immediately started thinking about how this energy could be released and used to make electricity, power submarines and annihilate cities. Six hundred years passed between the moment Chinese alchemists discovered gunpowder and the moment Turkish cannon pulverised the walls of Constantinople. Only forty years passed between the moment Einstein determined that any kind of mass could be converted into energy – that’s what E = mc² means – and the moment atom bombs obliterated Hiroshima and Nagasaki and nuclear power stations mushroomed all over the globe.
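To get a back-of-the-envelope sense of the scale the formula implies, consider a single gram of matter (an illustrative figure, not one given above), and take the speed of light as about 3 × 10⁸ metres per second:

E = mc^2 \approx 0.001\,\mathrm{kg} \times \left(3 \times 10^{8}\,\mathrm{m/s}\right)^{2} = 9 \times 10^{13}\,\mathrm{J}

That is on the order of ninety trillion joules locked inside one gram – roughly the energy released by twenty thousand tonnes of TNT – which is why the step from a chalkboard equation to Hiroshima was so short.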
Another crucial discovery was the internal combustion engine, which took little more than a generation to revolutionise human transportation and turn petroleum into liquid political power. Petroleum had been known for thousands of years, and was used to waterproof roofs and lubricate axles. Yet until just a century ago nobody thought it was useful for much more than that. The idea of spilling blood for the sake of oil would have seemed ludicrous. You might fight a war over land, gold, pepper or slaves, but not oil.
The career of electricity was more startling yet. Two centuries ago electricity played no role in the economy, and was used at most for arcane scientific experiments and cheap magic tricks. A series of inventions turned it into our universal genie in a lamp. We flick our fingers and it prints books and sews clothes, keeps our vegetables fresh and our ice cream frozen, cooks our dinners and executes our criminals, registers our thoughts and records our smiles, lights up our nights and entertains us with countless television shows. Few of us understand how electricity does all these things, but even fewer can imagine life without it.
At heart, the Industrial Revolution has been a revolution in energy conversion. It has demonstrated again and again that there is no limit to the amount of energy at our disposal. Or, more precisely, that the only limit is set by our ignorance. Every few decades we discover a new energy source, so that the sum total of energy at our disposal just keeps growing.
Why are so many people afraid that we are running out of energy? Why do they warn of disaster if we exhaust all available fossil fuels? Clearly the world does not lack energy. All we lack is the knowledge necessary to harness and convert it to our needs. The amount of energy stored in all the fossil fuel on earth is negligible compared to the amount that the sun dispenses every day, free of charge. Only a tiny proportion of the sun’s energy reaches us, yet it amounts to 3,766,800 exajoules of energy each year (a joule is a unit of energy in the metric system, about the amount you expend to lift a small apple one yard straight up; an exajoule is a billion billion joules – that’s a lot of apples).2 All the world’s plants capture only about 3,000 of those solar exajoules through the process of photosynthesis.3 All human activities and industries put together consume about 500 exajoules annually, equivalent to the amount of energy earth receives from the sun in just ninety minutes.4 And that’s only solar energy. In addition, we are surrounded by other enormous sources of energy, such as nuclear energy and gravitational energy, the latter most evident in the power of the ocean tides caused by the moon’s pull on the earth.
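As a rough check on that apple-based definition of the joule, take a small apple to weigh about 100 grams and a yard to be roughly 0.9 metres (illustrative assumptions, not figures given above). Lifting it straight up against gravity requires:

E = mgh \approx 0.1\,\mathrm{kg} \times 9.8\,\mathrm{m/s^{2}} \times 0.9\,\mathrm{m} \approx 0.9\,\mathrm{J}

– close enough to a single joule. An exajoule is 10^18 of these, so humanity's 500 exajoules a year correspond to something like 5 × 10²⁰ such apple-lifts.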
Prior to the Industrial Revolution, the human energy market was almost completely dependent on plants. People lived alongside a green energy reservoir carrying 3,000 exajoules a year, and tried to pump as much of its energy as they could. Yet there was a clear limit to how much they could extract. During the Industrial Revolution, we came to realise that we are actually living alongside an enormous ocean of energy, one holding billions upon billions of exajoules of potential power. All we need to do is invent better pumps.
Learning how to harness and convert energy effectively solved the other problem that slows economic growth – the scarcity of raw materials. As humans worked out how to harness large quantities of cheap energy, they could begin exploiting previously inaccessible deposits of raw materials (for example, mining iron in the Siberian wastelands), or transporting raw materials from ever more distant locations (for example, supplying a British textile mill with Australian wool). Simultaneously, scientific breakthroughs enabled humankind to invent completely new raw materials, such as plastic, and discover previously unknown natural materials, such as silicon and aluminium.
Chemists discovered aluminium only in the 1820s, but separating the metal from its ore was extremely difficult and costly. For decades, aluminium was much more expensive than gold. In the 1860s, Emperor Napoleon III of France commissioned aluminium cutlery to be laid out for his most distinguished guests. Less important visitors had to make do with the gold knives and forks.5 But at the end of the nineteenth century chemists discovered a way to extract immense amounts of cheap aluminium, and current global production stands at 30 million tons per year. Napoleon III would be surprised to hear that his subjects’ descendants use cheap disposable aluminium foil to wrap their sandwiches and put away their leftovers.
Two thousand years ago, when people in the Mediterranean basin suffered from dry skin they smeared olive oil on their hands. Today, they open a tube of hand cream. Below is the list of ingredients of a simple modern hand cream that I bought at a local store:
deionised water, stearic acid, glycerin, caprylic/capric triglyceride, propylene glycol, isopropyl myristate, panax ginseng root extract, fragrance, cetyl alcohol, triethanolamine, dimeticone, arctostaphylos uva-ursi leaf extract, magnesium ascorbyl phosphate, imidazolidinyl urea, methyl paraben, camphor, propyl paraben, hydroxyisohexyl 3-cyclohexene carboxaldehyde, hydroxycitronellal, linalool, butylphenyl methylpropional, citronellol, limonene, geraniol.
Almost all of these ingredients were invented or discovered in the last two centuries.
During World War One, Germany was placed under blockade and suffered severe shortages of raw materials, in particular saltpetre, an essential ingredient in gunpowder and other explosives. The most important saltpetre deposits were in Chile and India; there were none at all in Germany. True, saltpetre could be replaced by ammonia, but that was expensive to produce as well. Luckily for the Germans, one of their fellow citizens, a Jewish chemist named Fritz Haber, had discovered in 1908 a process for producing ammonia literally out of thin air. When war broke out, the Germans used Haber’s discovery to commence industrial production of explosives using air as a raw material. Some scholars believe that if it hadn’t been for Haber’s discovery, Germany would have been forced to surrender long before November 1918.6 The discovery won Haber (who during the war also pioneered the use of poison gas in battle) a Nobel Prize in 1918. In chemistry, not in peace.
The Industrial Revolution yielded an unprecedented combination of cheap and abundant energy and cheap and abundant raw materials. The result was an explosion in human productivity. The explosion was felt first and foremost in agriculture. Usually, when we think of the Industrial Revolution, we think of an urban landscape of smoking chimneys, or the plight of exploited coal miners sweating in the bowels of the earth. Yet the Industrial Revolution was above all else the Second Agricultural Revolution.
During the last 200 years, industrial production methods became the mainstay of agriculture. Machines such as tractors began to undertake tasks that were previously performed by muscle power, or not performed at all. Fields and animals became vastly more productive thanks to artificial fertilisers, industrial insecticides and an entire arsenal of hormones and medications. Refrigerators, ships and aeroplanes have made it possible to store produce for months, and transport it quickly and cheaply to the other side of the world. Europeans began to dine on fresh Argentinian beef and Japanese sushi.
Even plants and animals were mechanised. Around the time that Homo sapiens was elevated to divine status by humanist religions, farm animals stopped being viewed as living creatures that could feel pain and distress, and instead came to be treated as machines. Today these animals are often mass-produced in factory-like facilities, their bodies shaped in accordance with industrial needs. They pass their entire lives as cogs in a giant production line, and the length and quality of their existence is determined by the profits and losses of business corporations. Even when the industry takes care to keep them alive, reasonably healthy and well fed, it has no intrinsic interest in the animals’ social and psychological needs (except when these have a direct impact on production).
Egg-laying hens, for example, have a complex world of behavioural needs and drives. They feel strong urges to scout their environment, forage and peck around, determine social hierarchies, build nests and groom themselves. But the egg industry often locks the hens inside tiny coops, and it is not uncommon for it to squeeze four hens to a cage, each given a floor space of about 10 by 8.5 inches. The hens receive sufficient food, but they are unable to claim a territory, build a nest or engage in other natural activities. Indeed, the cage is so small that hens are often unable even to flap their wings or stand fully erect.
Pigs are among the most intelligent and inquisitive of mammals, second perhaps only to the great apes. Yet industrialised pig farms routinely confine nursing sows inside such small crates that they are literally unable to turn around (not to mention walk or forage). The sows are kept in these crates day and night for four weeks after giving birth. Their offspring are then taken away to be fattened up and the sows are impregnated with the next litter of piglets.
Many dairy cows live almost all their allotted years inside a small enclosure; standing, sitting and sleeping in their own urine and excrement. They receive their measure of food, hormones and medications from one set of machines, and get milked every few hours by another set of machines. The cow in the middle is treated as little more than a mouth that takes in raw materials and an udder that produces a commodity. Treating living creatures possessing complex emotional worlds as if they were machines is likely to cause them not only physical discomfort, but also much social stress and psychological frustration.7
41. Chicks on a conveyor belt in a commercial hatchery. Male chicks and imperfect female chicks are picked off the conveyor belt and are then asphyxiated in gas chambers, dropped into automatic shredders, or simply thrown into the rubbish, where they are crushed to death. Hundreds of millions of chicks die each year in such hatcheries.
{Photo and © Anonymous for Animal Rights (Israel).}
Just as the Atlantic slave trade did not stem from hatred towards Africans, so the modern animal industry is not motivated by animosity. Again, it is fuelled by indifference. Most people who produce and consume eggs, milk and meat rarely stop to think about the fate of the chickens, cows or pigs whose flesh and emissions they are eating. Those who do think often argue that such animals are really little different from machines, devoid of sensations and emotions, incapable of suffering. Ironically, the same scientific disciplines which shape our milk machines and egg machines have lately demonstrated beyond reasonable doubt that mammals and birds have a complex sensory and emotional make-up. They not only feel physical pain, but can also suffer from emotional distress.
Evolutionary psychology maintains that the emotional and social needs of farm animals evolved in the wild, when they were essential for survival and reproduction. For example, a wild cow had to know how to form close relations with other cows and bulls, or else she could not survive and reproduce. In order to learn the necessary skills, evolution implanted in calves – as in the young of all other social mammals – a strong desire to play (playing is the mammalian way of learning social behaviour). And it implanted in them an even stronger desire to bond with their mothers, whose milk and care were essential for survival.
What happens if farmers now take a young calf, separate her from her mother, put her in a closed cage, give her food, water and inoculations against diseases, and then, when she is old enough, inseminate her with bull sperm? From an objective perspective, this calf no longer needs either maternal bonding or playmates in order to survive and reproduce. But from a subjective perspective, the calf still feels a very strong urge to bond with her mother and to play with other calves. If these urges are not fulfilled, the calf suffers greatly. This is the basic lesson of evolutionary psychology: a need shaped in the wild continues to be felt subjectively even if it is no longer really necessary for survival and reproduction. The tragedy of industrial agriculture is that it takes great care of the objective needs of animals, while neglecting their subjective needs.
The truth of this theory has been known at least since the 1950s, when the American psychologist Harry Harlow studied the development of monkeys. Harlow separated infant monkeys from their mothers several hours after birth. The monkeys were isolated inside cages, and then raised by dummy mothers. In each cage, Harlow placed two dummy mothers. One was made of metal wires, and was fitted with a milk bottle from which the infant monkey could suck. The other was made of wood covered with cloth, which made it resemble a real monkey mother, but it provided the infant monkey with no material sustenance whatsoever. It was assumed that the infants would cling to the nourishing metal mother rather than to the barren cloth one.
To Harlow’s surprise, the infant monkeys showed a marked preference for the cloth mother, spending most of their time with her. When the two mothers were placed in close proximity, the infants held on to the cloth mother even while they reached over to suck milk from the metal mother. Harlow suspected that perhaps the infants did so because they were cold. So he fitted an electric bulb inside the wire mother, which now radiated heat. Most of the monkeys, except for the very young ones, continued to prefer the cloth mother.
42. One of Harlow’s orphaned monkeys clings to the cloth mother even while sucking milk from the metal mother.
{© Photo Researchers/Visualphotos.com.}
Follow-up research showed that Harlow’s orphaned monkeys grew up to be emotionally disturbed even though they had received all the nourishment they required. They never fitted into monkey society, had difficulties communicating with other monkeys, and suffered from high levels of anxiety and aggression. The conclusion was inescapable: monkeys must have psychological needs and desires that go beyond their material requirements, and if these are not fulfilled, they will suffer greatly. Harlow’s infant monkeys preferred to spend their time in the hands of the barren cloth mother because they were looking for an emotional bond and not only for milk. In the following decades, numerous studies showed that this conclusion applies not only to monkeys, but to other mammals, as well as birds. At present, millions of farm animals are subjected to the same conditions as Harlow’s monkeys, as farmers routinely separate calves, kids and other youngsters from their mothers, to be raised in isolation.8
Altogether, tens of billions of farm animals live today as part of a mechanised assembly line, and about 50 billion of them are slaughtered annually. These industrial livestock methods have led to a sharp increase in agricultural production and in human food reserves. Together with the mechanisation of plant cultivation, industrial animal husbandry is the basis for the entire modern socio-economic order. Before the industrialisation of agriculture, most of the food produced in fields and farms was ‘wasted’ feeding peasants and farmyard animals. Only a small percentage was available to feed artisans, teachers, priests and bureaucrats. Consequently, in almost all societies peasants comprised more than 90 per cent of the population. Following the industrialisation of agriculture, a shrinking number of farmers was enough to feed a growing number of clerks and factory hands. Today in the United States, only 2 per cent of the population makes a living from agriculture, yet this 2 per cent produces enough not only to feed the entire US population, but also to export surpluses to the rest of the world.9 Without the industrialisation of agriculture the urban Industrial Revolution could never have taken place – there would not have been enough hands and brains to staff factories and offices.
As those factories and offices absorbed the billions of hands and brains that were released from fieldwork, they began pouring out an unprecedented avalanche of products. Humans now produce far more steel, manufacture much more clothing, and build many more structures than ever before. In addition, they produce a mind-boggling array of previously unimaginable goods, such as light bulbs, mobile phones, cameras and dishwashers. For the first time in human history, supply began to outstrip demand. And an entirely new problem was born: who is going to buy all this stuff?
The modern capitalist economy must constantly increase production if it is to survive, like a shark that must swim or suffocate. Yet it’s not enough just to produce. Somebody must also buy the products, or industrialists and investors alike will go bust. To prevent this catastrophe and to make sure that people will always buy whatever new stuff industry produces, a new kind of ethic appeared: consumerism.
Most people throughout history lived under conditions of scarcity. Frugality was thus their watchword. The austere ethics of the Puritans and Spartans are but two famous examples. A good person avoided luxuries, never threw food away, and patched up torn trousers instead of buying a new pair. Only kings and nobles allowed themselves to renounce such values publicly and conspicuously flaunt their riches.
Consumerism sees the consumption of ever more products and services as a positive thing. It encourages people to treat themselves, spoil themselves, and even kill themselves slowly by overconsumption. Frugality is a disease to be cured. You don’t have to look far to see the consumer ethic in action – just read the back of a cereal box. Here’s a quote from a box of one of my favourite breakfast cereals, produced by an Israeli firm, Telma:
Sometimes you need a treat. Sometimes you need a little extra energy. There are times to watch your weight and times when you’ve just got to have something . . . right now! Telma offers a variety of tasty cereals just for you – treats without remorse.
The same package sports an ad for another brand of cereal called Health Treats:
Health Treats offers lots of grains, fruits and nuts for an experience that combines taste, pleasure and health. For an enjoyable treat in the middle of the day, suitable for a healthy lifestyle. A real treat with the wonderful taste of more [emphasis in the original].
Throughout most of history, people would probably have been repelled rather than attracted by such a text. They would have branded it as selfish, decadent and morally corrupt. Consumerism has worked very hard, with the help of popular psychology (‘Just do it!’), to convince people that indulgence is good for you, whereas frugality is self-oppression.
It has succeeded. We are all good consumers. We buy countless products that we don’t really need, and that until yesterday we didn’t know existed. Manufacturers deliberately design short-term goods and invent new and unnecessary models of perfectly satisfactory products that we must purchase in order to stay ‘in’. Shopping has become a favourite pastime, and consumer goods have become essential mediators in relationships between family members, spouses and friends. Religious holidays such as Christmas have become shopping festivals. In the United States, even Memorial Day – originally a solemn day for remembering fallen soldiers – is now an occasion for special sales. Most people mark this day by going shopping, perhaps to prove that the defenders of freedom did not die in vain.
The flowering of the consumerist ethic is manifested most clearly in the food market. Traditional agricultural societies lived in the awful shade of starvation. In the affluent world of today one of the leading health problems is obesity, which strikes the poor (who stuff themselves with hamburgers and pizzas) even more severely than the rich (who eat organic salads and fruit smoothies). Each year the US population spends more money on diets than the amount needed to feed all the hungry people in the rest of the world. Obesity is a double victory for consumerism. Instead of eating little, which will lead to economic contraction, people eat too much and then buy diet products – contributing to economic growth twice over.
How can we square the consumerist ethic with the capitalist ethic of the business person, according to which profits should not be wasted, and should instead be reinvested in production? It’s simple. As in previous eras, there is today a division of labour between the elite and the masses. In medieval Europe, aristocrats spent their money carelessly on extravagant luxuries, whereas peasants lived frugally, minding every penny. Today, the tables have turned. The rich take great care managing their assets and investments, while the less well heeled go into debt buying cars and televisions they don’t really need.
The capitalist and consumerist ethics are two sides of the same coin, a merger of two commandments. The supreme commandment of the rich is ‘Invest!’ The supreme commandment of the rest of us is ‘Buy!’
The capitalist–consumerist ethic is revolutionary in another respect. Most previous ethical systems presented people with a pretty tough deal. They were promised paradise, but only if they cultivated compassion and tolerance, overcame craving and anger, and restrained their selfish interests. This was too tough for most. The history of ethics is a sad tale of wonderful ideals that nobody can live up to. Most Christians did not imitate Christ, most Buddhists failed to follow Buddha, and most Confucians would have caused Confucius to throw a temper tantrum.
In contrast, most people today successfully live up to the capitalist–consumerist ideal. The new ethic promises paradise on condition that the rich remain greedy and spend their time making more money, and that the masses give free rein to their cravings and passions – and buy more and more. This is the first religion in history whose followers actually do what they are asked to do. How, though, do we know that we’ll really get paradise in return? We’ve seen it on television.
THE INDUSTRIAL REVOLUTION OPENED up new ways to convert energy and to produce goods, largely liberating humankind from its dependence on the surrounding ecosystem. Humans cut down forests, drained swamps, dammed rivers, flooded plains, laid down hundreds of thousands of miles of railroad tracks, and built skyscraping metropolises. As the world was moulded to fit the needs of Homo sapiens, habitats were destroyed and species went extinct. Our once green and blue planet is becoming a concrete and plastic shopping centre.
Today, the earth’s continents are home to billions of Sapiens. If you took all these people and put them on a large set of scales, their combined mass would be about 300 million tons. If you then took all our domesticated farmyard animals – cows, pigs, sheep and chickens – and placed them on an even larger set of scales, their mass would amount to about 700 million tons. In contrast, the combined mass of all surviving large wild animals – from porcupines and penguins to elephants and whales – is less than 100 million tons. Our children’s books, our iconography and our TV screens are still full of giraffes, wolves and chimpanzees, but the real world has very few of them left. There are about 80,000 giraffes in the world, compared to 1.5 billion cattle; only 200,000 wolves, compared to 400 million domesticated dogs; only 250,000 chimpanzees – in contrast to billions of humans. Humankind really has taken over the world.1
Ecological degradation is not the same as resource scarcity. As we saw in the previous chapter, the resources available to humankind are constantly increasing, and are likely to continue to do so. That’s why doomsday prophecies of resource scarcity are probably misplaced. In contrast, the fear of ecological degradation is only too well founded. The future may see Sapiens gaining control of a cornucopia of new materials and energy sources, while simultaneously destroying what remains of the natural habitat and driving most other species to extinction.
In fact, ecological turmoil might endanger the survival of Homo sapiens itself. Global warming, rising oceans and widespread pollution could make the earth less hospitable to our kind, and the future might consequently see a spiralling race between human power and human-induced natural disasters. As humans use their power to counter the forces of nature and subjugate the ecosystem to their needs and whims, they might cause more and more unanticipated and dangerous side effects. These are likely to be controllable only by even more drastic manipulations of the ecosystem, which would result in even worse chaos.
Many call this process ‘the destruction of nature’. But it’s not really destruction, it’s change. Nature cannot be destroyed. Sixty-five million years ago, an asteroid wiped out the dinosaurs, but in so doing opened the way forward for mammals. Today, humankind is driving many species into extinction and might even annihilate itself. But other organisms are doing quite well. Rats and cockroaches, for example, are in their heyday. These tenacious creatures would probably creep out from beneath the smoking rubble of a nuclear Armageddon, ready and able to spread their DNA. Perhaps 65 million years from now, intelligent rats will look back gratefully on the decimation wrought by humankind, just as we today can thank that dinosaur-busting asteroid.
Still, the rumours of our own extinction are premature. Since the Industrial Revolution, the world’s human population has burgeoned as never before. In 1700 the world was home to some 700 million humans. In 1800 there were 950 million of us. By 1900 we almost doubled our numbers to 1.6 billion. And by 2000 that figure had nearly quadrupled, to 6 billion. Today there are just shy of 7 billion Sapiens.
While all these Sapiens have grown increasingly impervious to the whims of nature, they have become ever more subject to the dictates of modern industry and government. The Industrial Revolution opened the way to a long line of experiments in social engineering and an even longer series of unpremeditated changes in daily life and human mentality. One example among many is the replacement of the rhythms of traditional agriculture with the uniform and precise schedule of industry.
Traditional agriculture depended on cycles of natural time and organic growth. Most societies were unable to make precise time measurements, nor were they terribly interested in doing so. The world went about its business without clocks and timetables, subject only to the movements of the sun and the growth cycles of plants. There was no uniform working day, and all routines changed drastically from season to season. People knew where the sun was, and watched anxiously for portents of the rainy season and harvest time, but they did not know the hour and hardly cared about the year. If a lost time traveller popped up in a medieval village and asked a passerby, ‘What year is this?’ the villager would be as bewildered by the question as by the stranger’s ridiculous clothing.
In contrast to medieval peasants and shoemakers, modern industry cares little about the sun or the season. It sanctifies precision and uniformity. For example, in a medieval workshop each shoemaker made an entire shoe, from sole to buckle. If one shoemaker was late for work, it did not stall the others. However, in a modern footwear-factory assembly line, every worker mans a machine that produces just a small part of a shoe, which is then passed on to the next machine. If the worker who operates machine no. 5 has overslept, it stalls all the other machines. In order to prevent such calamities, everybody must adhere to a precise timetable. Each worker arrives at work at exactly the same time. Everybody takes their lunch break together, whether they are hungry or not. Everybody goes home when a whistle announces that the shift is over – not when they have finished their project.
43. Charlie Chaplin as a simple worker caught in the wheels of the industrial assembly line, from the film Modern Times (1936).
{© Chaplin/United Artists/The Kobal Collection/Max Munn Autrey.}
The Industrial Revolution turned the timetable and the assembly line into a template for almost all human activities. Shortly after factories imposed their time frames on human behaviour, schools too adopted precise timetables, followed by hospitals, government offices and grocery stores. Even in places devoid of assembly lines and machines, the timetable became king. If the shift at the factory ends at 5 P.M., the local pub had better be open for business by 5:02.
A crucial link in the spreading timetable system was public transportation. If workers needed to start their shift by 08:00, the train or bus had to reach the factory gate by 07:55. A few minutes’ delay would lower production and perhaps even lead to the lay-offs of the unfortunate latecomers. In 1784 a carriage service with a published schedule began operating in Britain. Its timetable specified only the hour of departure, not arrival. Back then, each British city and town had its own local time, which could differ from London time by up to half an hour. When it was 12:00 in London, it was perhaps 12:20 in Liverpool and 11:50 in Canterbury. Since there were no telephones, no radio or television, and no fast trains – who could know, and who cared?2
The first commercial train service began operating between Liverpool and Manchester in 1830; ten years later, the first train timetable was issued. The trains were much faster than the old carriages, so the quirky differences in local hours became a severe nuisance. In 1847, British train companies put their heads together and agreed that henceforth all train timetables would be calibrated to Greenwich Observatory time, rather than the local times of Liverpool, Manchester or Glasgow. More and more institutions followed the lead of the train companies. Finally, in 1880, the British government took the unprecedented step of legislating that all timetables in Britain must follow Greenwich. For the first time in history, a country adopted a national time and obliged its population to live according to an artificial clock rather than local ones or sunrise-to-sunset cycles.
This modest beginning spawned a global network of timetables, synchronised down to the tiniest fractions of a second. When the broadcast media – first radio, then television – made their debut, they entered a world of timetables and became its main enforcers and evangelists. Among the first things radio stations broadcast were time signals, beeps that enabled far-flung settlements and ships at sea to set their clocks. Later, radio stations adopted the custom of broadcasting the news every hour. Nowadays, the first item of every news broadcast – more important even than the outbreak of war – is the time. During World War Two, BBC News was broadcast to Nazi-occupied Europe. Each news programme opened with a live broadcast of Big Ben tolling the hour – the magical sound of freedom. Ingenious German physicists found a way to determine the weather conditions in London based on tiny differences in the tone of the broadcast ding-dongs. This information offered invaluable help to the Luftwaffe. When the British Secret Service discovered this, they replaced the live broadcast with a set recording of the famous clock.
In order to run the timetable network, cheap but precise portable clocks became ubiquitous. In Assyrian, Sassanid or Inca cities there might have been at most a few sundials. In European medieval cities there was usually a single clock – a giant machine mounted on top of a high tower in the town square. These tower clocks were notoriously inaccurate, but since there were no other clocks in town to contradict them, it hardly made any difference. Today, a single affluent family generally has more timepieces at home than an entire medieval country. You can tell the time by looking at your wristwatch, glancing at your Android, peering at the alarm clock by your bed, gazing at the clock on the kitchen wall, staring at the microwave, catching a glimpse of the TV or DVD, or taking in the taskbar on your computer out of the corner of your eye. You need to make a conscious effort not to know what time it is.
The typical person consults these clocks several dozen times a day, because almost everything we do has to be done on time. An alarm clock wakes us up at 7 A.M., we heat our frozen bagel for exactly fifty seconds in the microwave, brush our teeth for three minutes until the electric toothbrush beeps, catch the 07:40 train to work, run on the treadmill at the gym until the beeper announces that half an hour is over, sit down in front of the TV at 7 P.M. to watch our favourite show, get interrupted at preordained moments by commercials that cost $1,000 per second, and eventually unload all our angst on a therapist who restricts our prattle to the now standard fifty-minute therapy hour.
The Industrial Revolution brought about dozens of major upheavals in human society. Adapting to industrial time is just one of them. Other notable examples include urbanisation, the disappearance of the peasantry, the rise of the industrial proletariat, the empowerment of the common person, democratisation, youth culture and the disintegration of patriarchy.
Yet all of these upheavals are dwarfed by the most momentous social revolution that ever befell humankind: the collapse of the family and the local community and their replacement by the state and the market. As best we can tell, from the earliest times, more than a million years ago, humans lived in small, intimate communities, most of whose members were kin. The Cognitive Revolution and the Agricultural Revolution did not change that. They glued together families and communities to create tribes, cities, kingdoms and empires, but families and communities remained the basic building blocks of all human societies. The Industrial Revolution, on the other hand, managed within little more than two centuries to break these building blocks into atoms. Most of the traditional functions of families and communities were handed over to states and markets.
Prior to the Industrial Revolution, the daily life of most humans ran its course within three ancient frames: the nuclear family, the extended family and the local intimate community.* Most people worked in the family business – the family farm or the family workshop, for example – or they worked in their neighbours’ family businesses. The family was also the welfare system, the health system, the education system, the construction industry, the trade union, the pension fund, the insurance company, the radio, the television, the newspapers, the bank and even the police.
When a person fell sick, the family took care of her. When a person grew old, the family supported her, and her children were her pension fund. When a person died, the family took care of the orphans. If a person wanted to build a hut, the family lent a hand. If a person wanted to open a business, the family raised the necessary money. If a person wanted to marry, the family chose, or at least vetted, the prospective spouse. If conflict arose with a neighbour, the family muscled in. But if a person’s illness was too grave for the family to manage, or a new business demanded too large an investment, or the neighbourhood quarrel escalated to the point of violence, the local community came to the rescue.
The community offered help on the basis of local traditions and an economy of favours, which often differed greatly from the supply and demand laws of the free market. In an old-fashioned medieval community, when my neighbour was in need, I helped build his hut and guard his sheep, without expecting any payment in return. When I was in need, my neighbour returned the favour. At the same time, the local potentate might have drafted all of us villagers to construct his castle without paying us a penny. In exchange, we counted on him to defend us against brigands and barbarians. Village life involved many transactions but few payments. There were some markets, of course, but their roles were limited. You could buy rare spices, cloth and tools, and hire the services of lawyers and doctors. Yet less than 10 per cent of commonly used products and services were bought in the market. Most human needs were taken care of by the family and the community.
There were also kingdoms and empires that performed important tasks such as waging wars, building roads and constructing palaces. For these purposes kings raised taxes and occasionally enlisted soldiers and labourers. Yet, with few exceptions, they tended to stay out of the daily affairs of families and communities. Even if they wanted to intervene, most kings could do so only with difficulty. Traditional agricultural economies had few surpluses with which to feed crowds of government officials, policemen, social workers, teachers and doctors. Consequently, most rulers did not develop mass welfare systems, health-care systems or educational systems. They left such matters in the hands of families and communities. Even on rare occasions when rulers tried to intervene more intensively in the daily lives of the peasantry (as happened, for example, in the Qin Empire in China), they did so by converting family heads and community elders into government agents.
Often enough, transportation and communication difficulties made it so difficult to intervene in the affairs of remote communities that many kingdoms preferred to cede even the most basic royal prerogatives – such as taxation and violence – to communities. The Ottoman Empire, for instance, allowed family vendettas to mete out justice, rather than supporting a large imperial police force. If my cousin killed somebody, the victim’s brother might kill me in sanctioned revenge. The sultan in Istanbul or even the provincial pasha did not intervene in such clashes, as long as violence remained within acceptable limits.
In the Chinese Ming Empire (1368–1644), the population was organised into the baojia system. Ten families were grouped to form a jia, and ten jia constituted a bao. When a member of a bao committed a crime, other bao members could be punished for it, in particular the bao elders. Taxes too were levied on the bao, and it was the responsibility of the bao elders rather than of the state officials to assess the situation of each family and determine the amount of tax it should pay. From the empire’s perspective, this system had a huge advantage. Instead of maintaining thousands of revenue officials and tax collectors to monitor the earnings and expenses of every family, the empire left these tasks to the community elders. The elders knew how much each villager was worth and they could usually enforce tax payments without involving the imperial army.
Many kingdoms and empires were in truth little more than large protection rackets. The king was the capo di tutti capi who collected protection money, and in return made sure that neighbouring crime syndicates and local small fry did not harm those under his protection. He did little else.
Life in the bosom of family and community was far from ideal. Families and communities could oppress their members no less brutally than do modern states and markets, and their internal dynamics were often fraught with tension and violence – yet people had little choice. A person who lost her family and community around 1750 was as good as dead. She had no job, no education and no support in times of sickness and distress. Nobody would loan her money or defend her if she got into trouble. There were no policemen, no social workers and no compulsory education. In order to survive, such a person quickly had to find an alternative family or community. Boys and girls who ran away from home could expect, at best, to become servants in some new family. At worst, there was the army or the brothel.
All this changed dramatically over the last two centuries. The Industrial Revolution gave the market immense new powers, provided the state with new means of communication and transportation, and placed at the government’s disposal an army of clerks, teachers, policemen and social workers. At first the market and the state discovered their path blocked by traditional families and communities who had little love for outside intervention. Parents and community elders were reluctant to let the younger generation be indoctrinated by nationalist education systems, conscripted into armies or turned into a rootless urban proletariat.
Over time, states and markets used their growing power to weaken the traditional bonds of family and community. The state sent its policemen to stop family vendettas and replace them with court decisions. The market sent its hawkers to wipe out longstanding local traditions and replace them with ever-changing commercial fashions. Yet this was not enough. In order really to break the power of family and community, they needed the help of a fifth column.
The state and the market approached people with an offer that could not be refused. ‘Become individuals,’ they said. ‘Marry whomever you desire, without asking permission from your parents. Take up whatever job suits you, even if community elders frown. Live wherever you wish, even if you cannot make it every week to the family dinner. You are no longer dependent on your family or your community. We, the state and the market, will take care of you instead. We will provide food, shelter, education, health, welfare and employment. We will provide pensions, insurance and protection.’
Romantic literature often presents the individual as somebody caught in a struggle against the state and the market. Nothing could be further from the truth. The state and the market are the mother and father of the individual, and the individual can survive only thanks to them. The market provides us with work, insurance and a pension. If we want to study a profession, the government’s schools are there to teach us. If we want to open a business, the bank loans us money. If we want to build a house, a construction company builds it and the bank gives us a mortgage, in some cases subsidised or insured by the state. If violence flares up, the police protect us. If we are sick for a few days, our health insurance takes care of us. If we are debilitated for months, national social services steps in. If we need around-the-clock assistance, we can go to the market and hire a nurse – usually some stranger from the other side of the world who takes care of us with the kind of devotion that we no longer expect from our own children. If we have the means, we can spend our golden years at a senior citizens’ home. The tax authorities treat us as individuals, and do not expect us to pay the neighbours’ taxes. The courts, too, see us as individuals, and never punish us for the crimes of our cousins.
Not only adult men, but also women and children, are recognised as individuals. Throughout most of history, women were often seen as the property of family or community. Modern states, on the other hand, see women as individuals, enjoying economic and legal rights independently of their family and community. They may hold their own bank accounts, decide whom to marry, and even choose to divorce or live on their own.
But the liberation of the individual comes at a cost. Many of us now bewail the loss of strong families and communities and feel alienated and threatened by the power the impersonal state and market wield over our lives. States and markets composed of alienated individuals can intervene in the lives of their members much more easily than states and markets composed of strong families and communities. When neighbours in a high-rise apartment building cannot even agree on how much to pay their janitor, how can we expect them to resist the state?
The deal between states, markets and individuals is an uneasy one. The state and the market disagree about their mutual rights and obligations, and individuals complain that both demand too much and provide too little. In many cases individuals are exploited by markets, and states employ their armies, police forces and bureaucracies to persecute individuals instead of defending them. Yet it is amazing that this deal works at all – however imperfectly. For it breaches countless generations of human social arrangements. Millions of years of evolution have designed us to live and think as community members. Within a mere two centuries we have become alienated individuals. Nothing testifies better to the awesome power of culture.
The nuclear family did not disappear completely from the modern landscape. When states and markets took from the family most of its economic and political roles, they left it some important emotional functions. The modern family is still supposed to provide for intimate needs, which state and market are (so far) incapable of providing. Yet even here the family is subject to increasing interventions. The market shapes to an ever-greater degree the way people conduct their romantic and sexual lives. Whereas traditionally the family was the main matchmaker, today it’s the market that tailors our romantic and sexual preferences, and then lends a hand in providing for them – for a fat fee. Previously bride and groom met in the family living room, and money passed from the hands of one father to another. Today courting is done at bars and cafés, and money passes from the hands of lovers to waitresses. Even more money is transferred to the bank accounts of fashion designers, gym managers, dieticians, cosmeticians and plastic surgeons, who help us arrive at the café looking as similar as possible to the market’s ideal of beauty.
Family and community vs. state and market
The state, too, keeps a sharper eye on family relations, especially between parents and children. In many countries parents are obliged to send their children to be educated in government schools, and even where private education is allowed, the state still supervises and vets the curriculum. Parents who are especially abusive or violent with their children may be restrained by the state. If need be, the state may even imprison the parents or transfer their children to foster families. Until not long ago, the suggestion that the state ought to prevent parents from beating or humiliating their children would have been rejected out of hand as ludicrous and unworkable. In most societies parental authority was sacred. Respect for and obedience to one’s parents were among the most hallowed values, and parents could do almost anything they wanted, including killing newborn babies, selling children into slavery and marrying off daughters to men more than twice their age. Today, parental authority is in full retreat. Youngsters are increasingly excused from obeying their elders, whereas parents are blamed for anything that goes wrong in the life of their child. Mum and Dad are about as likely to be found innocent in the Freudian courtroom as were defendants in a Stalinist show trial.
Like the nuclear family, the community could not vanish completely from our world without something replacing its emotional role. Markets and states today provide most of the material needs once provided by communities, but they must also supply tribal bonds.
Markets and states do so by fostering ‘imagined communities’ that contain millions of strangers, and which are tailored to national and commercial needs. An imagined community is a community of people who don’t really know each other, but imagine that they do. Such communities are not a novel invention. Kingdoms, empires and churches functioned for millennia as imagined communities. In ancient China, tens of millions of people saw themselves as members of a single family, with the emperor as its father. In the Middle Ages, millions of devout Muslims imagined that they were all brothers and sisters in the great community of Islam. Yet throughout history, such imagined communities played second fiddle to intimate communities of several dozen people who knew each other well. The intimate communities fulfilled the emotional needs of their members and were essential for everyone’s survival and welfare. In the last two centuries, the intimate communities have withered, leaving imagined communities to fill in the emotional vacuum.
The two most important examples of the rise of such imagined communities are the nation and the consumer tribe. The nation is the imagined community of the state. The consumer tribe is the imagined community of the market. Both are imagined communities because it is impossible for all customers in a market or for all members of a nation really to know one another the way villagers knew one another in the past. No German can intimately know the other 80 million members of the German nation, or the other 500 million customers inhabiting the European Common Market (which evolved first into the European Community and finally became the European Union).
Consumerism and nationalism work extra hours to make us imagine that millions of strangers belong to the same community as ourselves, that we all have a common past, common interests and a common future. This isn’t a lie. It’s imagination. Like money, limited liability companies and human rights, nations and consumer tribes are inter-subjective realities. They exist only in our collective imagination, yet their power is immense. As long as millions of Germans believe in the existence of a German nation, get excited at the sight of German national symbols, retell German national myths, and are willing to sacrifice money, time and limbs for the German nation, Germany will remain one of the strongest powers in the world.
The nation does its best to hide its imagined character. Most nations argue that they are natural and eternal entities, created in some primordial epoch by mixing the soil of the motherland with the blood of the people. Yet such claims are usually exaggerated. Nations existed in the distant past, but their importance was much smaller than today because the importance of the state was much smaller. A resident of medieval Nuremberg might have felt some loyalty towards the German nation, but she felt far more loyalty towards her family and local community, which took care of most of her needs. Moreover, whatever importance ancient nations may have had, few of them survived. Most existing nations evolved only after the Industrial Revolution.
The Middle East provides ample examples. The Syrian, Lebanese, Jordanian and Iraqi nations are the product of haphazard borders drawn in the sand by French and British diplomats who ignored local history, geography and economy. These diplomats determined in 1918 that the people of Kurdistan, Baghdad and Basra would henceforth be ‘Iraqis’. It was primarily the French who decided who would be Syrian and who Lebanese. Saddam Hussein and Hafez el-Asad tried their best to promote and reinforce their Anglo-French-manufactured national consciousnesses, but their bombastic speeches about the allegedly eternal Iraqi and Syrian nations had a hollow ring.
It goes without saying that nations cannot be created from thin air. Those who worked hard to construct Iraq or Syria made use of real historical, geographical and cultural raw materials – some of which are centuries and millennia old. Saddam Hussein co-opted the heritage of the Abbasid caliphate and the Babylonian Empire, even calling one of his crack armoured units the Hammurabi Division. Yet that does not turn the Iraqi nation into an ancient entity. If I bake a cake from flour, oil and sugar, all of which have been sitting in my pantry for the past two months, it does not mean that the cake itself is two months old.
In recent decades, national communities have been increasingly eclipsed by tribes of customers who do not know one another intimately but share the same consumption habits and interests, and therefore feel part of the same consumer tribe – and define themselves as such. This sounds very strange, but we are surrounded by examples. Madonna fans, for example, constitute a consumer tribe. They define themselves largely by shopping. They buy Madonna concert tickets, CDs, posters, shirts and ring tones, and thereby define who they are. Manchester United fans, vegetarians and environmentalists are other examples. They, too, are defined above all by what they consume. It is the keystone of their identity. A German vegetarian might well prefer to marry a French vegetarian than a German carnivore.
The revolutions of the last two centuries have been so swift and radical that they have changed the most fundamental characteristic of the social order. Traditionally, the social order was hard and rigid. ‘Order’ implied stability and continuity. Swift social revolutions were exceptional, and most social transformations resulted from the accumulation of numerous small steps. Humans tended to assume that the social structure was inflexible and eternal. Families and communities might struggle to change their place within the order, but the idea that you could change the fundamental structure of the order was alien. People tended to reconcile themselves to the status quo, declaring that ‘this is how it always was, and this is how it always will be’.
Over the last two centuries, the pace of change became so quick that the social order acquired a dynamic and malleable nature. It now exists in a state of permanent flux. When we speak of modern revolutions we tend to think of 1789 (the French Revolution), 1848 (the liberal revolutions) or 1917 (the Russian Revolution). But the fact is that, these days, every year is revolutionary. Today, even a thirty-year-old can honestly tell disbelieving teenagers, ‘When I was young, the world was completely different.’ The Internet, for example, came into wide usage only in the early 1990s, hardly twenty years ago. Today we cannot imagine the world without it.
Hence any attempt to define the characteristics of modern society is akin to defining the colour of a chameleon. The only characteristic of which we can be certain is the incessant change. People have become used to this, and most of us think about the social order as something flexible, which we can engineer and improve at will. The main promise of premodern rulers was to safeguard the traditional order or even to go back to some lost golden age. For the last two centuries, the currency of politics has been the promise to destroy the old world and build a better one in its place. Not even the most conservative of political parties vows merely to keep things as they are. Everybody promises social reform, educational reform, economic reform – and they often fulfil those promises.
Just as geologists expect that tectonic movements will result in earthquakes and volcanic eruptions, so might we expect that drastic social movements will result in bloody outbursts of violence. The political history of the nineteenth and twentieth centuries is often told as a series of deadly wars, holocausts and revolutions. Like a child in new boots leaping from puddle to puddle, this view sees history as leapfrogging from one bloodbath to the next, from World War One to World War Two to the Cold War, from the Armenian genocide to the Jewish genocide to the Rwandan genocide, from Robespierre to Lenin to Hitler.
There is truth here, but this all too familiar list of calamities is somewhat misleading. We focus too much on the puddles and forget about the dry land separating them. The late modern era has seen unprecedented levels not only of violence and horror, but also of peace and tranquillity. Charles Dickens wrote of the French Revolution that ‘It was the best of times, it was the worst of times.’ This may be true not only of the French Revolution, but of the entire era it heralded.
It is especially true of the seven decades that have elapsed since the end of World War Two. During this period humankind has for the first time faced the possibility of complete self-annihilation and has experienced a fair number of actual wars and genocides. Yet these decades were also the most peaceful era in human history – and by a wide margin. This is surprising because these very same decades experienced more economic, social and political change than any previous era. The tectonic plates of history are moving at a frantic pace, but the volcanoes are mostly silent. The new elastic order seems to be able to contain and even initiate radical structural changes without collapsing into violent conflict.3
Most people don’t appreciate just how peaceful an era we live in. None of us was alive a thousand years ago, so we easily forget how much more violent the world used to be. And as wars become rarer, they attract more attention. Many more people think about the wars raging today in Afghanistan and Iraq than about the peace in which most Brazilians and Indians live.
Even more importantly, it’s easier to relate to the suffering of individuals than of entire populations. However, in order to understand macro-historical processes, we need to examine mass statistics rather than individual stories. In the year 2000, wars caused the deaths of 310,000 individuals, and violent crime killed another 520,000. Each and every victim is a world destroyed, a family ruined, friends and relatives scarred for life. Yet from a macro perspective these 830,000 victims comprised only 1.5 per cent of the 56 million people who died in 2000. That year 1.26 million people died in car accidents (2.25 per cent of total mortality) and 815,000 people committed suicide (1.45 per cent).4
The figures for 2002 are even more surprising. Out of 57 million dead, only 172,000 people died in war and 569,000 died of violent crime (a total of 741,000 victims of human violence). In contrast, 873,000 people committed suicide.5 It turns out that in the year following the 9/11 attacks, despite all the talk of terrorism and war, the average person was more likely to kill himself than to be killed by a terrorist, a soldier or a drug dealer.
In most parts of the world, people go to sleep without fearing that in the middle of the night a neighbouring tribe might surround their village and slaughter everyone. Well-off British subjects travel daily from Nottingham to London through Sherwood Forest without fear that a gang of merry green-clad brigands will ambush them and take their money to give to the poor (or, more likely, murder them and take the money for themselves). Students brook no canings from their teachers, children need not fear that they will be sold into slavery when their parents can’t pay their bills, and women know that the law forbids their husbands from beating them and forcing them to stay at home. Increasingly, around the world, these expectations are fulfilled.
The decline of violence is due largely to the rise of the state. Throughout history, most violence resulted from local feuds between families and communities. (Even today, as the above figures indicate, local crime is a far deadlier threat than international wars.) As we have seen, early farmers, who knew no political organisations larger than the local community, suffered rampant violence.6 As kingdoms and empires became stronger, they reined in communities and the level of violence decreased. In the decentralised kingdoms of medieval Europe, about twenty to forty people were murdered each year for every 100,000 inhabitants. In recent decades, when states and markets have become all-powerful and communities have vanished, violence rates have dropped even further. Today the global average is only nine murders a year per 100,000 people, and most of these murders take place in weak states such as Somalia and Colombia. In the centralised states of Europe, the average is one murder a year per 100,000 people.7
There are certainly cases where states use their power to kill their own citizens, and these often loom large in our memories and fears. During the twentieth century, tens of millions if not hundreds of millions of people were killed by the security forces of their own states. Still, from a macro perspective, state-run courts and police forces have probably increased the level of security worldwide. Even in oppressive dictatorships, the average modern person is far less likely to die at the hands of another person than in premodern societies. In 1964 a military dictatorship was established in Brazil. It ruled the country until 1985. During these twenty years, several thousand Brazilians were murdered by the regime. Thousands more were imprisoned and tortured. Yet even in the worst years, the average Brazilian in Rio de Janeiro was far less likely to die at human hands than the average Waorani, Arawete or Yanomamo, indigenous peoples who live in the depths of the Amazon forest without army, police or prisons. Anthropological studies have indicated that between a quarter and a half of their menfolk die sooner or later in violent conflicts over property, women or prestige.8
It is perhaps debatable whether violence within states has decreased or increased since 1945. What nobody can deny is that international violence has dropped to an all-time low. Perhaps the most obvious example is the collapse of the European empires. Throughout history empires have crushed rebellions with an iron fist, and when its day came, a sinking empire used all its might to save itself, usually collapsing into a bloodbath. Its final demise generally led to anarchy and wars of succession. Since 1945 most empires have opted for peaceful early retirement. Their process of collapse became relatively swift, calm and orderly.
In 1945 Britain ruled a quarter of the globe. Thirty years later it ruled just a few small islands. In the intervening decades it retreated from most of its colonies in a peaceful and orderly manner. Though in some places such as Malaya and Kenya the British tried to hang on by force of arms, in most places they accepted the end of empire with a sigh rather than with a temper tantrum. They focused their efforts not on retaining power, but on transferring it as smoothly as possible. At least some of the praise usually heaped on Mahatma Gandhi for his non-violent creed is actually owed to the British Empire. Despite many years of bitter and often violent struggle, when the end of the Raj came, the Indians did not have to fight the British in the streets of Delhi and Calcutta. The empire’s place was taken by a slew of independent states, most of which have since enjoyed stable borders and have for the most part lived peacefully alongside their neighbours. True, tens of thousands of people perished at the hands of the threatened British Empire, and in several hot spots its retreat led to the eruption of ethnic conflicts that claimed hundreds of thousands of lives (particularly in India). Yet when compared to the long-term historical average, the British withdrawal was an exemplar of peace and order. The French Empire was more stubborn. Its collapse involved bloody rearguard actions in Vietnam and Algeria that cost hundreds of thousands of lives. Yet the French, too, retreated from the rest of their dominions quickly and peacefully, leaving behind orderly states rather than a chaotic free-for-all.
The Soviet collapse in 1989 was even more peaceful, despite the eruption of ethnic conflict in the Balkans, the Caucasus and Central Asia. Never before has such a mighty empire disappeared so swiftly and so quietly. The Soviet Empire of 1989 had suffered no military defeat except in Afghanistan, no external invasions, no rebellions, nor even large-scale Martin Luther King-style campaigns of civil disobedience. The Soviets still had millions of soldiers, tens of thousands of tanks and aeroplanes, and enough nuclear weapons to wipe out the whole of humankind several times over. The Red Army and the other Warsaw Pact armies remained loyal. Had the last Soviet ruler, Mikhail Gorbachev, given the order, the Red Army would have opened fire on the subjugated masses.
Yet the Soviet elite, and the Communist regimes throughout most of eastern Europe (Romania and Serbia were the exceptions), chose not to use even a tiny fraction of this military power. When they realised that Communism was bankrupt, they renounced force, admitted their failure, packed their suitcases and went home. Gorbachev and his colleagues gave up without a struggle not only the Soviet conquests of World War Two, but also the much older tsarist conquests in the Baltic, the Ukraine, the Caucasus and Central Asia. It is chilling to contemplate what might have happened if Gorbachev had behaved like the Serbian leadership – or like the French in Algeria.
The independent states that came after these empires were remarkably uninterested in war. With very few exceptions, since 1945 states no longer invade other states in order to conquer and swallow them up. Such conquests had been the bread and butter of political history since time immemorial. It was how most great empires were established, and how most rulers and populations expected things to stay. But campaigns of conquest like those of the Romans, Mongols and Ottomans cannot take place today anywhere in the world. Since 1945, no independent country recognised by the UN has been conquered and wiped off the map. Limited international wars still occur from time to time, and millions still die in wars, but wars are no longer the norm.
Many people believe that the disappearance of international war is unique to the rich democracies of western Europe. In fact, peace reached Europe after it prevailed in other parts of the world. Thus the last serious international wars between South American countries were the Peru–Ecuador War of 1941 and the Bolivia–Paraguay War of 1932–5. And before that there hadn’t been a serious war between South American countries since 1879–84, with Chile on one side and Bolivia and Peru on the other.
We seldom think of the Arab world as particularly peaceful. Yet only once since the Arab countries won their independence has one of them mounted a full-scale invasion of another (the Iraqi invasion of Kuwait in 1990). There have been quite a few border clashes (e.g. Syria vs Jordan in 1970), many armed interventions of one in the affairs of another (e.g. Syria in Lebanon), numerous civil wars (Algeria, Yemen, Libya) and an abundance of coups and revolts. Yet there have been no full-scale international wars among the Arab states except the Gulf War. Even widening the scope to include the entire Muslim world adds only one more example, the Iran–Iraq War. There was no Turkey–Iran War, Pakistan–Afghanistan War, or Indonesia–Malaysia War.
In Africa things are far less rosy. But even there, most conflicts are civil wars and coups. Since African states won their independence in the 1960s and 1970s, very few countries have invaded one another in the hope of conquest.
There have been periods of relative calm before, as, for example, in Europe between 1871 and 1914, and they always ended badly. But this time it is different. For real peace is not the mere absence of war. Real peace is the implausibility of war. There has never been real peace in the world. Between 1871 and 1914, a European war remained a plausible eventuality, and the expectation of war dominated the thinking of armies, politicians and ordinary citizens alike. The same foreboding held during all other peaceful periods in history. An iron law of international politics decreed, ‘For every two nearby polities, there is a plausible scenario that will cause them to go to war against one another within one year.’ This law of the jungle was in force in late nineteenth-century Europe, in medieval Europe, in ancient China and in classical Greece. If Sparta and Athens were at peace in 450 BC, there was a plausible scenario that they would be at war by 449 BC.
Today humankind has broken the law of the jungle. There is at last real peace, and not just absence of war. For most polities, there is no plausible scenario leading to full-scale conflict within one year. What could lead to war between Germany and France next year? Or between China and Japan? Or between Brazil and Argentina? Some minor border clash might occur, but only a truly apocalyptic scenario could result in an old-fashioned full-scale war between Brazil and Argentina in 2014, with Argentinian armoured divisions sweeping to the gates of Rio, and Brazilian carpet-bombers pulverising the neighbourhoods of Buenos Aires. Such wars might still erupt between several pairs of states, e.g. between Israel and Syria, Ethiopia and Eritrea, or the USA and Iran, but these are only the exceptions that prove the rule.
This situation might of course change in the future and, with hindsight, the world of today might seem incredibly naïve. Yet from a historical perspective, our very naïvety is fascinating. Never before has peace been so prevalent that people could not even imagine war.
{Lithograph from a photo by Fishbourne & Gow, San Francisco, 1850s © Corbis.}
44. and 45. Gold miners in California during the Gold Rush, and Facebook’s headquarters near San Francisco. In 1849 California built its fortunes on gold. Today, California builds its fortunes on silicon. But whereas in 1849 the gold actually lay there in the Californian soil, the real treasures of Silicon Valley are locked inside the heads of high-tech employees.
{© Proehl Studios/Corbis.}
Scholars have sought to explain this happy development in more books and articles than you would ever want to read yourself, and they have identified several contributing factors. First and foremost, the price of war has gone up dramatically. The Nobel Peace Prize to end all peace prizes should have been given to Robert Oppenheimer and his fellow architects of the atomic bomb. Nuclear weapons have turned war between superpowers into collective suicide, and made it impossible to seek world domination by force of arms.
Secondly, while the price of war soared, its profits declined. For most of history, polities could enrich themselves by looting or annexing enemy territories. Most wealth consisted of material things like fields, cattle, slaves and gold, so it was easy to loot or occupy. Today, wealth consists mainly of human capital and organisational know-how. Consequently it is difficult to carry it off or conquer it by military force.
Consider California. Its wealth was initially built on gold mines. But today it is built on silicon and celluloid – Silicon Valley and the celluloid hills of Hollywood. What would happen if the Chinese were to mount an armed invasion of California, land a million soldiers on the beaches of San Francisco and storm inland? They would gain little. There are no silicon mines in Silicon Valley. The wealth resides in the minds of Google engineers and Hollywood script doctors, directors and special-effects wizards, who would be on the first plane to Bangalore or Mumbai long before the Chinese tanks rolled into Sunset Boulevard. It is not coincidental that the few full-scale international wars that still take place in the world, such as the Iraqi invasion of Kuwait, occur in places where wealth is old-fashioned material wealth. The Kuwaiti sheikhs could flee abroad, but the oil fields stayed put and were occupied.
While war became less profitable, peace became more lucrative than ever. In traditional agricultural economies long-distance trade and foreign investment were sideshows. Consequently, peace brought little profit, aside from avoiding the costs of war. If, say, in 1400 England and France were at peace, the French did not have to pay heavy war taxes or suffer destructive English invasions, but otherwise it did not benefit their wallets. In modern capitalist economies, foreign trade and investments have become all-important. Peace therefore brings unique dividends. As long as China and the USA are at peace, the Chinese can prosper by selling products to the USA, trading on Wall Street and receiving US investments.
Last but not least, a tectonic shift has taken place in global political culture. Many elites in history – Hun chieftains, Viking noblemen and Aztec priests, for example – viewed war as a positive good. Others viewed it as evil, but an inevitable one, which we had better turn to our own advantage. Ours is the first time in history that the world is dominated by a peace-loving elite – politicians, business people, intellectuals and artists who genuinely see war as both evil and avoidable. (There were pacifists in the past, such as the early Christians, but in the rare cases that they gained power, they tended to forget about their requirement to ‘turn the other cheek’.)
There is a positive feedback loop among these four factors. The threat of nuclear holocaust fosters pacifism; when pacifism spreads, war recedes and trade flourishes; and trade increases both the profits of peace and the costs of war. Over time, this feedback loop creates another obstacle to war, which may ultimately prove the most important of all. The tightening web of international connections erodes the independence of most countries, lessening the chance that any one of them might single-handedly let slip the dogs of war. Most countries no longer engage in full-scale war for the simple reason that they are no longer independent. Though citizens in Israel, Italy, Mexico or Thailand may harbour illusions of independence, the fact is that their governments cannot conduct independent economic or foreign policies, and they are certainly incapable of initiating and conducting full-scale war on their own. As explained in Chapter 11, we are witnessing the formation of a global empire. Like previous empires, this one, too, enforces peace within its borders. And since its borders cover the entire globe, the World Empire effectively enforces world peace.
So, is the modern era one of mindless slaughter, war and oppression, typified by the trenches of World War One, the nuclear mushroom cloud over Hiroshima and the gory manias of Hitler and Stalin? Or is it an era of peace, epitomised by the trenches never dug in South America, the mushroom clouds that never appeared over Moscow and New York, and the serene visages of Mahatma Gandhi and Martin Luther King?
The answer is a matter of timing. It is sobering to realise how often our view of the past is distorted by events of the last few years. If this chapter had been written in 1945 or 1962, it would probably have been much more glum. Since it was written in 2014, it takes a relatively buoyant approach to modern history.
To satisfy both optimists and pessimists, we may conclude by saying that we are on the threshold of both heaven and hell, moving nervously between the gateway of the one and the anteroom of the other. History has still not decided where we will end up, and a string of coincidences might yet send us rolling in either direction.
THE LAST 500 YEARS HAVE WITNESSED A breathtaking series of revolutions. The earth has been united into a single ecological and historical sphere. The economy has grown exponentially, and humankind today enjoys the kind of wealth that used to be the stuff of fairy tales. Science and the Industrial Revolution have given humankind superhuman powers and practically limitless energy. The social order has been completely transformed, as have politics, daily life and human psychology.
But are we happier? Did the wealth humankind accumulated over the last five centuries translate into a new-found contentment? Did the discovery of inexhaustible energy resources open before us inexhaustible stores of bliss? Going further back, have the seventy or so turbulent millennia since the Cognitive Revolution made the world a better place to live? Was the late Neil Armstrong, whose footprint remains intact on the windless moon, happier than the nameless hunter-gatherer who 30,000 years ago left her handprint on a wall in Chauvet Cave? If not, what was the point of developing agriculture, cities, writing, coinage, empires, science and industry?
Historians seldom ask such questions. They do not ask whether the citizens of Uruk and Babylon were happier than their foraging ancestors, whether the rise of Islam made Egyptians more pleased with their lives, or how the collapse of the European empires in Africa has influenced the happiness of countless millions. Yet these are the most important questions one can ask of history. Most current ideologies and political programmes are based on rather flimsy ideas concerning the real source of human happiness. Nationalists believe that political self-determination is essential for our happiness. Communists postulate that everyone would be blissful under the dictatorship of the proletariat. Capitalists maintain that only the free market can ensure the greatest happiness of the greatest number, by creating economic growth and material abundance and by teaching people to be self-reliant and enterprising.
What would happen if serious research were to disprove these hypotheses? If economic growth and self-reliance do not make people happier, what’s the benefit of capitalism? What if it turns out that the subjects of large empires are generally happier than the citizens of independent states and that, for example, Ghanaians were happier under British colonial rule than under their own homegrown dictators? What would that say about the process of decolonisation and the value of national self-determination?
These are all hypothetical possibilities, because so far historians have avoided raising these questions – not to mention answering them. They have researched the history of just about everything – politics, society, economics, gender, diseases, sexuality, food, clothing – yet they have seldom stopped to ask how these influence human happiness.
Though few have studied the long-term history of happiness, almost every scholar and layperson has some vague preconception about it. In one common view, human capabilities have increased throughout history. Since humans generally use their capabilities to alleviate miseries and fulfil aspirations, it follows that we must be happier than our medieval ancestors, and they must have been happier than Stone Age hunter-gatherers.
But this progressive account is unconvincing. As we have seen, new aptitudes, behaviours and skills do not necessarily make for a better life. When humans learned to farm in the Agricultural Revolution, their collective power to shape their environment increased, but the lot of many individual humans grew harsher. Peasants had to work harder than foragers to eke out less varied and nutritious food, and they were far more exposed to disease and exploitation. Similarly, the spread of European empires greatly increased the collective power of humankind, by circulating ideas, technologies and crops, and opening new avenues of commerce. Yet this was hardly good news for millions of Africans, Native Americans and Aboriginal Australians. Given the proven human propensity for misusing power, it seems naïve to believe that the more clout people have, the happier they will be.
Some challengers of this view take a diametrically opposed position. They argue for a reverse correlation between human capabilities and happiness. Power corrupts, they say, and as humankind gained more and more power, it created a cold mechanistic world ill-suited to our real needs. Evolution moulded our minds and bodies to the life of hunter-gatherers. The transition first to agriculture and then to industry has condemned us to living unnatural lives that cannot give full expression to our inherent inclinations and instincts, and therefore cannot satisfy our deepest yearnings. Nothing in the comfortable lives of the urban middle class can approach the wild excitement and sheer joy experienced by a forager band on a successful mammoth hunt. Every new invention just puts another mile between us and the Garden of Eden.
Yet this romantic insistence on seeing a dark shadow behind each invention is as dogmatic as the belief in the inevitability of progress. Perhaps we are out of touch with our inner hunter-gatherer, but it’s not all bad. For instance, over the last two centuries modern medicine has decreased child mortality from 33 per cent to less than 5 per cent. Can anyone doubt that this made a huge contribution to the happiness not only of those children who would otherwise have died, but also of their families and friends?
A more nuanced position takes the middle road. Until the Scientific Revolution there was no clear correlation between power and happiness. Medieval peasants may indeed have been more miserable than their hunter-gatherer forebears. But in the last few centuries humans have learned to use their capacities more wisely. The triumphs of modern medicine are just one example. Other unprecedented achievements include the steep drop in violence, the virtual disappearance of international wars, and the near elimination of large-scale famines.
Yet this, too, is an oversimplification. Firstly, it bases its optimistic assessment on a very small sample of years. The majority of humans began to enjoy the fruits of modern medicine no earlier than 1850, and the drastic drop in child mortality is a twentieth-century phenomenon. Mass famines continued to blight much of humanity up to the middle of the twentieth century. During Communist China’s Great Leap Forward of 1958–61, somewhere between 10 and 50 million human beings starved to death. International wars became rare only after 1945, largely thanks to the new threat of nuclear annihilation. Hence, though the last few decades have been an unprecedented golden age for humanity, it is too early to know whether this represents a fundamental shift in the currents of history or an ephemeral eddy of good fortune. When judging modernity, it is all too tempting to take the viewpoint of a twenty-first-century middle-class Westerner. We must not forget the viewpoints of a nineteenth-century Welsh coal miner, Chinese opium addict or Tasmanian Aborigine. Truganini is no less important than Homer Simpson.
Secondly, even the brief golden age of the last half-century may turn out to have sown the seeds of future catastrophe. Over the last few decades, we have been disturbing the ecological equilibrium of our planet in myriad new ways, with what seem likely to be dire consequences. A lot of evidence indicates that we are destroying the foundations of human prosperity in an orgy of reckless consumption.
Finally, we can congratulate ourselves on the unprecedented accomplishments of modern Sapiens only if we completely ignore the fate of all other animals. Much of the vaunted material wealth that shields us from disease and famine was accumulated at the expense of laboratory monkeys, dairy cows and conveyor-belt chickens. Over the last two centuries tens of billions of them have been subjected to a regime of industrial exploitation whose cruelty has no precedent in the annals of planet Earth. If we accept a mere tenth of what animal-rights activists are claiming, then modern industrial agriculture might well be the greatest crime in history. When evaluating global happiness, it is wrong to count the happiness only of the upper classes, of Europeans or of men. Perhaps it is also wrong to consider only the happiness of humans.
So far we have discussed happiness as if it were largely a product of material factors, such as health, diet and wealth. If people are richer and healthier, then they must also be happier. But is that really so obvious? Philosophers, priests and poets have brooded over the nature of happiness for millennia, and many have concluded that social, ethical and spiritual factors have as great an impact on our happiness as material conditions. Perhaps people in modern affluent societies suffer greatly from alienation and meaninglessness despite their prosperity. And perhaps our less well-to-do ancestors found much contentment in community, religion and a bond with nature.
In recent decades, psychologists and biologists have taken up the challenge of studying scientifically what really makes people happy. Is it money, family, genetics or perhaps virtue? The first step is to define what is to be measured. The generally accepted definition of happiness is ‘subjective well-being’. Happiness, according to this view, is something I feel inside myself, a sense of either immediate pleasure or long-term contentment with the way my life is going. If it’s something felt inside, how can it be measured from outside? Presumably, we can do so by asking people to tell us how they feel. So psychologists or biologists who want to assess how happy people feel give them questionnaires to fill out and tally the results.
A typical subjective well-being questionnaire asks interviewees to grade on a scale of zero to ten their agreement with statements such as ‘I feel pleased with the way I am’, ‘I feel that life is very rewarding’, ‘I am optimistic about the future’ and ‘Life is good’. The researcher then averages the answers to calculate the interviewee’s general level of subjective well-being on the same zero-to-ten scale.
Such questionnaires are used in order to correlate happiness with various objective factors. One study might compare a thousand people who earn $100,000 a year with a thousand people who earn $50,000. If the study discovers that the first group has an average subjective well-being level of 8.7, while the latter has an average of only 7.3, the researcher may reasonably conclude that there is a positive correlation between wealth and subjective well-being. To put it in simple English, money brings happiness. The same method can be used to examine whether people living in democracies are happier than people living in dictatorships, and whether married people are happier than singles, divorcees or widowers.
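For readers who like to see the arithmetic spelled out, here is a minimal sketch, in Python, of the kind of tallying such a study involves. Every score, group and helper function below is invented purely for illustration; none of it comes from any actual survey.

```python
# A hypothetical sketch of the questionnaire arithmetic described above.
# Every number here is invented for illustration; no real survey data is used.
from statistics import mean

def well_being_score(answers):
    """Average one interviewee's zero-to-ten ratings into a single well-being level."""
    return mean(answers)

# Imaginary answers to four statements, for three interviewees in each income group.
high_earners = [well_being_score(a) for a in ([9, 8, 9, 9], [8, 9, 8, 9], [9, 9, 8, 8])]
low_earners = [well_being_score(a) for a in ([7, 7, 8, 7], [6, 7, 7, 8], [8, 7, 7, 7])]

print('average well-being, higher earners:', round(mean(high_earners), 1))
print('average well-being, lower earners:', round(mean(low_earners), 1))
# If the richer group's average comes out higher, the researcher would read this
# as a positive correlation between wealth and subjective well-being.
```

A real study would of course use thousands of interviewees and a proper statistical test of correlation rather than an eyeball comparison of two averages, but the bookkeeping is essentially this simple.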
This provides a grounding for historians, who can examine wealth, political freedom and divorce rates in the past. If people are happier in democracies and married people are happier than divorcees, a historian has a basis for arguing that the democratisation process of the last few decades contributed to the happiness of humankind, whereas the growing rates of divorce indicate an opposite trend.
This way of thinking is not flawless, but before pointing out some of the holes, it is worth considering the findings.
One interesting conclusion is that money does indeed bring happiness. But only up to a point, and beyond that point it has little significance. For people stuck at the bottom of the economic ladder, more money means greater happiness. If you are an American single mother earning $12,000 a year cleaning houses and you suddenly win $500,000 in the lottery, you will probably experience a significant and long-term surge in your subjective well-being. You’ll be able to feed and clothe your children without sinking further into debt. However, if you’re a top executive earning $250,000 a year and you win $1 million in the lottery, or your company board suddenly decides to double your salary, your surge is likely to last only a few weeks. According to the empirical findings, it’s almost certainly not going to make a big difference to the way you feel over the long run. You’ll buy a snazzier car, move into a palatial home, get used to drinking Chateau Pétrus instead of California Cabernet, but it’ll soon all seem routine and unexceptional.
Another interesting finding is that illness decreases happiness in the short term, but is a source of long-term distress only if a person’s condition is constantly deteriorating or if the disease involves on-going and debilitating pain. People who are diagnosed with chronic illness such as diabetes are usually depressed for a while, but if the illness does not get worse they adjust to their new condition and rate their happiness as highly as healthy people do. Imagine that Lucy and Luke are middle-class twins, who agree to take part in a subjective well-being study. On the way back from the psychology laboratory, Lucy’s car is hit by a bus, leaving Lucy with a number of broken bones and a permanently lame leg. Just as the rescue crew is cutting her out of the wreckage, the phone rings and Luke shouts that he has won the lottery’s $10,000,000 jackpot. Two years later she’ll be limping and he’ll be a lot richer, but when the psychologist comes around for a follow-up study, they are both likely to give the same answers they did on the morning of that fateful day.
Family and community seem to have more impact on our happiness than money and health. People with strong families who live in tight-knit and supportive communities are significantly happier than people whose families are dysfunctional and who have never found (or never sought) a community to be part of. Marriage is particularly important. Repeated studies have found that there is a very close correlation between good marriages and high subjective well-being, and between bad marriages and misery. This holds true irrespective of economic or even physical conditions. An impecunious invalid surrounded by a loving spouse, a devoted family and a warm community may well feel better than an alienated billionaire, provided that the invalid’s poverty is not too severe and that his illness is not degenerative or painful.
This raises the possibility that the immense improvement in material conditions over the last two centuries was offset by the collapse of the family and the community. If so, the average person might well be no happier today than in 1800. Even the freedom we value so highly may be working against us. We can choose our spouses, friends and neighbours, but they can choose to leave us. With the individual wielding unprecedented power to decide her own path in life, we find it ever harder to make commitments. We thus live in an increasingly lonely world of unravelling communities and families.
But the most important finding of all is that happiness does not really depend on objective conditions of either wealth, health or even community. Rather, it depends on the correlation between objective conditions and subjective expectations. If you want a bullock-cart and get a bullock-cart, you are content. If you want a brand-new Ferrari and get only a second-hand Fiat you feel deprived. This is why winning the lottery has, over time, the same impact on people’s happiness as a debilitating car accident. When things improve, expectations balloon, and consequently even dramatic improvements in objective conditions can leave us dissatisfied. When things deteriorate, expectations shrink, and consequently even a severe illness might leave you pretty much as happy as you were before.
You might say that we didn’t need a bunch of psychologists and their questionnaires to discover this. Prophets, poets and philosophers realised thousands of years ago that being satisfied with what you already have is far more important than getting more of what you want. Still, it’s nice when modern research – bolstered by lots of numbers and charts – reaches the same conclusions the ancients did.
The crucial importance of human expectations has far-reaching implications for understanding the history of happiness. If happiness depended only on objective conditions such as wealth, health and social relations, it would have been relatively easy to investigate its history. The finding that it depends on subjective expectations makes the task of historians far harder. We moderns have an arsenal of tranquillisers and painkillers at our disposal, but our expectations of ease and pleasure, and our intolerance of inconvenience and discomfort, have increased to such an extent that we may well suffer from pain more than our ancestors ever did.
It’s hard to accept this line of thinking. The problem is a fallacy of reasoning embedded deep in our psyches. When we try to guess or imagine how happy other people are now, or how people in the past were, we inevitably imagine ourselves in their shoes. But that won’t work because it pastes our expectations on to the material conditions of others. In modern affluent societies it is customary to take a shower and change your clothes every day. Medieval peasants went without washing for months on end, and hardly ever changed their clothes. The very thought of living like that, filthy and reeking to the bone, is abhorrent to us. Yet medieval peasants seem not to have minded. They were used to the feel and smell of a long-unlaundered shirt. It’s not that they wanted a change of clothes but couldn’t get it – they had what they wanted. So, at least as far as clothing goes, they were content.
That’s not so surprising, when you think of it. After all, our chimpanzee cousins seldom wash and never change their clothes. Nor are we disgusted by the fact that our pet dogs and cats don’t shower or change their coats daily. We pat, hug and kiss them all the same. Small children in affluent societies often dislike showering, and it takes them years of education and parental discipline to adopt this supposedly attractive custom. It is all a matter of expectations.
If happiness is determined by expectations, then two pillars of our society – mass media and the advertising industry – may unwittingly be depleting the globe’s reservoirs of contentment. If you were an eighteen-year-old youth in a small village 5,000 years ago you’d probably think you were good-looking because there were only fifty other men in your village and most of them were either old, scarred and wrinkled, or still little kids. But if you are a teenager today you are a lot more likely to feel inadequate. Even if the other guys at school are an ugly lot, you don’t measure yourself against them but against the movie stars, athletes and supermodels you see all day on television, Facebook and giant billboards.
So maybe Third World discontent is fomented not merely by poverty, disease, corruption and political oppression but also by mere exposure to First World standards. The average Egyptian was far less likely to die from starvation, plague or violence under Hosni Mubarak than under Ramses II or Cleopatra. Never had the material condition of most Egyptians been so good. You’d think they would have been dancing in the streets in 2011, thanking Allah for their good fortune. Instead they rose up furiously to overthrow Mubarak. They weren’t comparing themselves to their ancestors under the pharaohs, but rather to their contemporaries in the affluent West.
If that’s the case, even immortality might lead to discontent. Suppose science comes up with cures for all diseases, effective anti-ageing therapies and regenerative treatments that keep people indefinitely young. In all likelihood, the immediate result will be an unprecedented epidemic of anger and anxiety.
Those unable to afford the new miracle treatments – the vast majority of people – will be beside themselves with rage. Throughout history, the poor and oppressed comforted themselves with the thought that at least death is even-handed – that the rich and powerful will also die. The poor will not be comfortable with the thought that they have to die, while the rich will remain young and beautiful for ever.
46. Football star Cristiano Ronaldo launches his underwear line. In previous eras the standard of beauty was set by the handful of people who lived next door to you. Today the media and the fashion industry expose us to a totally unrealistic standard of beauty. They search out the most gorgeous people on the planet, and then parade them constantly before our eyes. No wonder we are far less happy with the way we look.
{Europa Press via Getty Images.}
But the tiny minority able to afford the new treatments will not be euphoric either. They will have much to be anxious about. Although the new therapies could extend life and youth, they cannot revive corpses. How dreadful to think that I and my loved ones can live for ever, but only if we don’t get hit by a truck or blown to smithereens by a terrorist! Potentially a-mortal people are likely to grow averse to taking even the slightest risk, and the agony of losing a spouse, child or close friend will be unbearable.
Social scientists distribute subjective well-being questionnaires and correlate the results with socio-economic factors such as wealth and political freedom. Biologists use the same questionnaires, but correlate the answers people give them with biochemical and genetic factors. Their findings are shocking.
Biologists hold that our mental and emotional world is governed by biochemical mechanisms shaped by millions of years of evolution. Like all other mental states, our subjective well-being is not determined by external parameters such as salary, social relations or political rights. Rather, it is determined by a complex system of nerves, neurons, synapses and various biochemical substances such as serotonin, dopamine and oxytocin.
Nobody is ever made happy by winning the lottery, buying a house, getting a promotion or even finding true love. People are made happy by one thing and one thing only – pleasant sensations in their bodies. A person who just won the lottery or found new love and jumps from joy is not really reacting to the money or the lover. She is reacting to various hormones coursing through her bloodstream, and to the storm of electric signals flashing between different parts of her brain.
Unfortunately for all hopes of creating heaven on earth, our internal biochemical system seems to be programmed to keep happiness levels relatively constant. There’s no natural selection for happiness as such – a happy hermit’s genetic line will go extinct as the genes of a pair of anxious parents get carried on to the next generation. Happiness and misery play a role in evolution only to the extent that they encourage or discourage survival and reproduction. Perhaps it’s not surprising, then, that evolution has moulded us to be neither too miserable nor too happy. It enables us to enjoy a momentary rush of pleasant sensations, but these never last for ever. Sooner or later they subside and give place to unpleasant sensations.
For example, evolution provided pleasant feelings as rewards to males who spread their genes by having sex with fertile females. If sex were not accompanied by such pleasure, few males would bother. At the same time, evolution made sure that these pleasant feelings quickly subsided. If orgasms were to last for ever, the very happy males would die of hunger for lack of interest in food, and would not take the trouble to look for additional fertile females.
Some scholars compare human biochemistry to an air-conditioning system that keeps the temperature constant, come heatwave or snowstorm. Events might momentarily change the temperature, but the air-conditioning system always returns the temperature to the same set point.
Some air-conditioning systems are set at 70 degrees Fahrenheit. Others are set at 68 degrees. Human happiness conditioning systems also differ from person to person. On a scale from one to ten, some people are born with a cheerful biochemical system that allows their mood to swing between levels six and ten, stabilising with time at eight. Such a person is quite happy even if she lives in an alienating big city, loses all her money in a stock-exchange crash and is diagnosed with diabetes. Other people are cursed with a gloomy biochemistry that swings between three and seven and stabilises at five. Such an unhappy person remains depressed even if she enjoys the support of a tight-knit community, wins millions in the lottery and is as healthy as an Olympic athlete. Indeed, even if our gloomy friend wins $50,000,000 in the morning, discovers the cure for both AIDS and cancer by noon, makes peace between Israelis and Palestinians that afternoon, and then in the evening reunites with her long-lost child who disappeared years ago – she would still be incapable of experiencing anything beyond level seven happiness. Her brain is simply not built for exhilaration, come what may.
Think for a moment of your family and friends. You know some people who remain relatively joyful, no matter what befalls them. And then there are those who are always disgruntled, no matter what gifts the world lays at their feet. We tend to believe that if we could just change our workplace, get married, finish writing that novel, buy a new car or repay the mortgage, we would be on top of the world. Yet when we get what we desire we don’t seem to be any happier. Buying cars and writing novels do not change our biochemistry. They can startle it for a fleeting moment, but it is soon back to its set point.
How can this be squared with the above-mentioned psychological and sociological findings that, for example, married people are happier on average than singles? First, these findings are correlations – the direction of causation may be the opposite of what some researchers have assumed. It is true that married people are happier than singles and divorcees, but that does not necessarily mean that marriage produces happiness. It could be that happiness causes marriage. Or more correctly, that serotonin, dopamine and oxytocin bring about and maintain a marriage. People who are born with a cheerful biochemistry are generally happy and content. Such people are more attractive spouses, and consequently they have a greater chance of getting married. They are also less likely to divorce, because it is far easier to live with a happy and content spouse than with a depressed and dissatisfied one. Consequently, it’s true that married people are happier on average than singles, but a single woman prone to gloom because of her biochemistry would not necessarily become happier if she were to hook up with a husband.
In addition, most biologists are not fanatics. They maintain that happiness is determined mainly by biochemistry, but they agree that psychological and sociological factors also have their place. Our mental air-conditioning system has some freedom of movement within predetermined borders. It is almost impossible to exceed the upper and lower emotional boundaries, but marriage and divorce can have an impact in the area between the two. Somebody born with an average of level five happiness would never dance wildly in the streets. But a good marriage should enable her to enjoy level seven from time to time, and to avoid the despondency of level three.
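Purely as an illustrative toy, and not anything the biologists cited here have proposed, the ‘mental air-conditioning’ idea can be sketched in a few lines of Python: events nudge the mood, but it drifts back towards an inborn set point and can never leave an inborn range. The decay rate, ranges and event sizes below are all invented.

```python
# A toy model of the 'mental air-conditioning' analogy above.
# Set points, ranges, event sizes and the decay rate are invented for illustration.

def next_mood(mood, event, set_point, low, high, decay=0.5):
    """Shift mood by an event, pull it halfway back to the set point, clip to the inborn range."""
    mood = mood + event                              # a windfall or an accident moves the mood...
    mood = set_point + (mood - set_point) * decay    # ...but it drifts back towards the set point
    return max(low, min(high, mood))                 # and can never leave the inborn range

# Someone born with a 'cheerful' biochemistry: range six to ten, set point eight.
mood = 8.0
for event in (+5, 0, 0, -4, 0, 0):                   # a windfall, quiet weeks, then a setback
    mood = next_mood(mood, event, set_point=8, low=6, high=10)
    print(round(mood, 2))
# Whatever happens, the printed values stay between six and ten and keep
# returning towards eight, which is the point the set-point picture makes.
```

In this toy picture, a good marriage or a painful divorce corresponds to events that move the mood within the range; nothing in ordinary life moves the set point itself.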
If we accept the biological approach to happiness, then history turns out to be of minor importance, since most historical events have had no impact on our biochemistry. History can change the external stimuli that cause serotonin to be secreted, yet it does not change the resulting serotonin levels, and hence it cannot make people happier.
Compare a medieval French peasant to a modern Parisian banker. The peasant lived in an unheated mud hut overlooking the local pigsty, while the banker goes home to a splendid penthouse with all the latest technological gadgets and a view of the Champs-Elysées. Intuitively, we would expect the banker to be much happier than the peasant. However, mud huts, penthouses and the Champs-Elysées don’t really determine our mood. Serotonin does. When the medieval peasant completed the construction of his mud hut, his brain neurons secreted serotonin, bringing it up to level X. When in 2014 the banker made the last payment on his wonderful penthouse, his brain neurons secreted a similar amount of serotonin, bringing it up to a similar level X. It makes no difference to the brain that the penthouse is far more comfortable than the mud hut. The only thing that matters is that at present the level of serotonin is X. Consequently the banker would not be one iota happier than his great-great-great-grandfather, the poor medieval peasant.
This is true not only of private lives, but also of great collective events. Take, for example, the French Revolution. The revolutionaries were busy: they executed the king, gave lands to the peasants, declared the rights of man, abolished noble privileges and waged war against the whole of Europe. Yet none of that changed French biochemistry. Consequently, despite all the political, social, ideological and economic upheavals brought about by the revolution, its impact on French happiness was small. Those who won a cheerful biochemistry in the genetic lottery were just as happy before the revolution as after. Those with a gloomy biochemistry complained about Robespierre and Napoleon with the same bitterness with which they earlier complained about Louis XVI and Marie Antoinette.
If so, what good was the French Revolution? If people did not become any happier, then what was the point of all that chaos, fear, blood and war? Biologists would never have stormed the Bastille. People think that this political revolution or that social reform will make them happy, but their biochemistry tricks them time and again.
There is only one historical development that has real significance. Today, when we finally realise that the keys to happiness are in the hands of our biochemical system, we can stop wasting our time on politics and social reforms, putsches and ideologies, and focus instead on the only thing that can make us truly happy: manipulating our biochemistry. If we invest billions in understanding our brain chemistry and developing appropriate treatments, we can make people far happier than ever before, without any need of revolutions. Prozac, for example, does not change regimes, but by raising serotonin levels it lifts people out of their depression.
Nothing captures the biological argument better than the famous New Age slogan: ‘Happiness Begins Within.’ Money, social status, plastic surgery, beautiful houses, powerful positions – none of these will bring you happiness. Lasting happiness comes only from serotonin, dopamine and oxytocin.1
In Aldous Huxley’s dystopian novel Brave New World, published in 1932 at the height of the Great Depression, happiness is the supreme value and psychiatric drugs replace the police and the ballot as the foundation of politics. Each day, each person takes a dose of ‘soma’, a synthetic drug which makes people happy without harming their productivity and efficiency. The World State that governs the entire globe is never threatened by wars, revolutions, strikes or demonstrations, because all people are supremely content with their current conditions, whatever they may be. Huxley’s vision of the future is far more troubling than George Orwell’s Nineteen Eighty-Four. Huxley’s world seems monstrous to most readers, but it is hard to explain why. Everybody is happy all the time – what could be wrong with that?
Huxley’s disconcerting world is based on the biological assumption that happiness equals pleasure. To be happy is no more and no less than experiencing pleasant bodily sensations. Since our biochemistry limits the volume and duration of these sensations, the only way to make people experience a high level of happiness over an extended period of time is to manipulate their biochemical system.
But that definition of happiness is contested by some scholars. In a famous study, Daniel Kahneman, winner of the Nobel Prize in economics, asked people to recount a typical work day, going through it episode by episode and evaluating how much they enjoyed or disliked each moment. He discovered what seems to be a paradox in most people’s view of their lives. Take the work involved in raising a child. Kahneman found that when counting moments of joy and moments of drudgery, bringing up a child turns out to be a rather unpleasant affair. It consists largely of changing nappies, washing dishes and dealing with temper tantrums, which nobody likes to do. Yet most parents declare that their children are their chief source of happiness. Does it mean that people don’t really know what’s good for them?
That’s one option. Another is that the findings demonstrate that happiness is not the surplus of pleasant over unpleasant moments. Rather, happiness consists in seeing one’s life in its entirety as meaningful and worthwhile. There is an important cognitive and ethical component to happiness. Our values make all the difference to whether we see ourselves as ‘miserable slaves to a baby dictator’ or as ‘lovingly nurturing a new life’.2 As Nietzsche put it, if you have a why to live, you can bear almost any how. A meaningful life can be extremely satisfying even in the midst of hardship, whereas a meaningless life is a terrible ordeal no matter how comfortable it is.
Though people in all cultures and eras have felt the same type of pleasures and pains, the meaning they have ascribed to their experiences has probably varied widely. If so, the history of happiness might have been far more turbulent than biologists imagine. It’s a conclusion that does not necessarily favour modernity. Assessing life minute by minute, medieval people certainly had it rough. However, if they believed the promise of everlasting bliss in the afterlife, they may well have viewed their lives as far more meaningful and worthwhile than modern secular people, who in the long term can expect nothing but complete and meaningless oblivion. Asked ‘Are you satisfied with your life as a whole?’, people in the Middle Ages might have scored quite highly in a subjective well-being questionnaire.
So our medieval ancestors were happy because they found meaning to life in collective delusions about the afterlife? Yes. As long as nobody punctured their fantasies, why shouldn’t they? As far as we can tell, from a purely scientific viewpoint, human life has absolutely no meaning. Humans are the outcome of blind evolutionary processes that operate without goal or purpose. Our actions are not part of some divine cosmic plan, and if planet Earth were to blow up tomorrow morning, the universe would probably keep going about its business as usual. As far as we can tell at this point, human subjectivity would not be missed. Hence any meaning that people ascribe to their lives is just a delusion. The other-worldly meanings medieval people found in their lives were no more deluded than the modern humanist, nationalist and capitalist meanings modern people find. The scientist who says her life is meaningful because she increases the store of human knowledge, the soldier who declares that his life is meaningful because he fights to defend his homeland, and the entrepreneur who finds meaning in building a new company are no less delusional than their medieval counterparts who found meaning in reading scriptures, going on a crusade or building a new cathedral.
So perhaps happiness is synchronising one’s personal delusions of meaning with the prevailing collective delusions. As long as my personal narrative is in line with the narratives of the people around me, I can convince myself that my life is meaningful, and find happiness in that conviction.
This is quite a depressing conclusion. Does happiness really depend on self-delusion?
If happiness is based on feeling pleasant sensations, then in order to be happier we need to re-engineer our biochemical system. If happiness is based on feeling that life is meaningful, then in order to be happier we need to delude ourselves more effectively. Is there a third alternative?
Both the above views share the assumption that happiness is some sort of subjective feeling (of either pleasure or meaning), and that in order to judge people’s happiness, all we need to do is ask them how they feel. To many of us, that seems logical because the dominant religion of our age is liberalism. Liberalism sanctifies the subjective feelings of individuals. It views these feelings as the supreme source of authority. What is good and what is bad, what is beautiful and what is ugly, what ought to be and what ought not to be, are all determined by what each one of us feels.
Liberal politics is based on the idea that the voters know best, and there is no need for Big Brother to tell us what is good for us. Liberal economics is based on the idea that the customer is always right. Liberal art declares that beauty is in the eye of the beholder. Students in liberal schools and universities are taught to think for themselves. Commercials urge us to ‘Just do it!’ Action films, stage dramas, soap operas, novels and catchy pop songs indoctrinate us constantly: ‘Be true to yourself’, ‘Listen to yourself’, ‘Follow your heart’. Jean-Jacques Rousseau stated this view most classically: ‘What I feel to be good – is good. What I feel to be bad – is bad.’
People who have been raised from infancy on a diet of such slogans are prone to believe that happiness is a subjective feeling and that each individual best knows whether she is happy or miserable. Yet this view is unique to liberalism. Most religions and ideologies throughout history stated that there are objective yardsticks for goodness and beauty, and for how things ought to be. They were suspicious of the feelings and preferences of the ordinary person. At the entrance of the temple of Apollo at Delphi, pilgrims were greeted by the inscription: ‘Know thyself!’ The implication was that the average person is ignorant of his true self, and is therefore likely to be ignorant of true happiness. Freud would probably concur.*
And so would Christian theologians. St Paul and St Augustine knew perfectly well that if you asked people about it, most of them would rather have sex than pray to God. Does that prove that having sex is the key to happiness? Not according to Paul and Augustine. It proves only that humankind is sinful by nature, and that people are easily seduced by Satan. From a Christian viewpoint, the vast majority of people are in more or less the same situation as heroin addicts. Imagine that a psychologist embarks on a study of happiness among drug users. He polls them and finds that they declare, every single one of them, that they are only happy when they shoot up. Would the psychologist publish a paper declaring that heroin is the key to happiness?
The idea that feelings are not to be trusted is not restricted to Christianity. At least when it comes to the value of feelings, even Darwin and Dawkins might find common ground with St Paul and St Augustine. According to the selfish gene theory, natural selection makes people, like other organisms, choose what is good for the reproduction of their genes, even if it is bad for them as individuals. Most males spend their lives toiling, worrying, competing and fighting, instead of enjoying peaceful bliss, because their DNA manipulates them for its own selfish aims. Like Satan, DNA uses fleeting pleasures to tempt people and place them in its power.
Most religions and philosophies have consequently taken a very different approach to happiness than liberalism does.3 The Buddhist position is particularly interesting. Buddhism has assigned the question of happiness more importance than perhaps any other human creed. For 2,500 years, Buddhists have systematically studied the essence and causes of happiness, which is why there is a growing interest among the scientific community both in their philosophy and their meditation practices.
Buddhism shares the basic insight of the biological approach to happiness, namely that happiness results from processes occurring within one’s body, and not from events in the outside world. However, starting from the same insight, Buddhism reaches very different conclusions.
According to Buddhism, most people identify happiness with pleasant feelings, while identifying suffering with unpleasant feelings. People consequently ascribe immense importance to what they feel, craving to experience more and more pleasures, while avoiding pain. Whatever we do throughout our lives, whether scratching our leg, fidgeting slightly in the chair, or fighting world wars, we are just trying to get pleasant feelings.
The problem, according to Buddhism, is that our feelings are no more than fleeting vibrations, changing every moment, like the ocean waves. If five minutes ago I felt joyful and purposeful, now these feelings are gone, and I might well feel sad and dejected. So if I want to experience pleasant feelings, I have to constantly chase them, while driving away the unpleasant feelings. Even if I succeed, I immediately have to start all over again, without ever getting any lasting reward for my troubles.
What is so important about obtaining such ephemeral prizes? Why struggle so hard to achieve something that disappears almost as soon as it arises? According to Buddhism, the root of suffering is neither the feeling of pain nor of sadness nor even of meaninglessness. Rather, the real root of suffering is this never-ending and pointless pursuit of ephemeral feelings, which causes us to be in a constant state of tension, restlessness and dissatisfaction. Due to this pursuit, the mind is never satisfied. Even when experiencing pleasure, it is not content, because it fears this feeling might soon disappear, and craves that this feeling should stay and intensify.
People are liberated from suffering not when they experience this or that fleeting pleasure, but rather when they understand the impermanent nature of all their feelings, and stop craving them. This is the aim of Buddhist meditation practices. In meditation, you are supposed to closely observe your mind and body, witness the ceaseless arising and passing of all your feelings, and realise how pointless it is to pursue them. When the pursuit stops, the mind becomes very relaxed, clear and satisfied. All kinds of feelings go on arising and passing – joy, anger, boredom, lust – but once you stop craving particular feelings, you can just accept them for what they are. You live in the present moment instead of fantasising about what might have been.
The resulting serenity is so profound that those who spend their lives in the frenzied pursuit of pleasant feelings can hardly imagine it. It is like a man standing for decades on the seashore, embracing certain ‘good’ waves and trying to prevent them from disintegrating, while simultaneously pushing back ‘bad’ waves to prevent them from getting near him. Day in, day out, the man stands on the beach, driving himself crazy with this fruitless exercise. Eventually, he sits down on the sand and just allows the waves to come and go as they please. How peaceful!
This idea is so alien to modern liberal culture that when Western New Age movements encountered Buddhist insights, they translated them into liberal terms, thereby turning them on their head. New Age cults frequently argue: ‘Happiness does not depend on external conditions. It depends only on what we feel inside. People should stop pursuing external achievements such as wealth and status, and connect instead with their inner feelings.’ Or more succinctly, ‘Happiness Begins Within.’ This is exactly what biologists argue, but more or less the opposite of what Buddha said.
Buddha agreed with modern biology and New Age movements that happiness is independent of external conditions. Yet his more important and far more profound insight was that true happiness is also independent of our inner feelings. Indeed, the more significance we give our feelings, the more we crave them, and the more we suffer. Buddha’s recommendation was to stop not only the pursuit of external achievements, but also the pursuit of inner feelings.
To sum up, subjective well-being questionnaires identify our well-being with our subjective feelings, and identify the pursuit of happiness with the pursuit of particular emotional states. In contrast, for many traditional philosophies and religions, such as Buddhism, the key to happiness is to know the truth about yourself – to understand who, or what, you really are. Most people wrongly identify themselves with their feelings, thoughts, likes and dislikes. When they feel anger, they think, ‘I am angry. This is my anger.’ They consequently spend their life avoiding some kinds of feelings and pursuing others. They never realise that they are not their feelings, and that the relentless pursuit of particular feelings just traps them in misery.
If this is so, then our entire understanding of the history of happiness might be misguided. Maybe it isn’t so important whether people’s expectations are fulfilled and whether they enjoy pleasant feelings. The main question is whether people know the truth about themselves. What evidence do we have that people today understand this truth any better than ancient foragers or medieval peasants?
Scholars began to study the history of happiness only a few years ago, and we are still formulating initial hypotheses and searching for appropriate research methods. It’s much too early to adopt rigid conclusions and end a debate that’s hardly yet begun. What is important is to get to know as many different approaches as possible and to ask the right questions.
Most history books focus on the ideas of great thinkers, the bravery of warriors, the charity of saints and the creativity of artists. They have much to tell about the weaving and unravelling of social structures, about the rise and fall of empires, about the discovery and spread of technologies. Yet they say nothing about how all this influenced the happiness and suffering of individuals. This is the biggest lacuna in our understanding of history. We had better start filling it.
THIS BOOK BEGAN BY PRESENTING HISTORY as the next stage in the continuum of physics to chemistry to biology. Sapiens are subject to the same physical forces, chemical reactions and natural-selection processes that govern all living beings. Natural selection may have provided Homo sapiens with a much larger playing field than it has given to any other organism, but the field has still had its boundaries. The implication has been that, no matter what their efforts and achievements, Sapiens are incapable of breaking free of their biologically determined limits.
But as the twenty-first century unfolds, this is no longer true: Homo sapiens is transcending those limits. It is now beginning to break the laws of natural selection, replacing them with the laws of intelligent design.
For close to 4 billion years, every single organism on the planet evolved subject to natural selection. Not even one was designed by an intelligent creator. The giraffe, for example, got its long neck thanks to competition between archaic giraffes rather than to the whims of a super-intelligent being. Proto-giraffes who had longer necks had access to more food and consequently produced more offspring than did those with shorter necks. Nobody, certainly not the giraffes, said, ‘A long neck would enable giraffes to munch leaves off the treetops. Let’s extend it.’ The beauty of Darwin’s theory is that it does not need to assume an intelligent designer to explain how giraffes ended up with long necks.
For billions of years, intelligent design was not even an option, because there was no intelligence which could design things. Microorganisms, which until quite recently were the only living things around, are capable of amazing feats. A microorganism belonging to one species can incorporate genetic codes from a completely different species into its cell and thereby gain new capabilities, such as resistance to antibiotics. Yet, as best we know, microorganisms have no consciousness, no aims in life, and no ability to plan ahead.
At some stage organisms such as giraffes, dolphins, chimpanzees and Neanderthals evolved consciousness and the ability to plan ahead. But even if a Neanderthal fantasised about fowls so fat and slow-moving that he could just scoop them up whenever he was hungry, he had no way of turning that fantasy into reality. He had to hunt the birds that had been naturally selected.
The first crack in the old regime appeared about 12,000 years ago, during the Agricultural Revolution. Sapiens who dreamed of fat, slow-moving chickens discovered that if they mated the fattest hen with the slowest cock, some of their offspring would be both fat and slow. If you mated those offspring with each other, you could produce a line of fat, slow birds. It was a breed of chickens unknown to nature, produced by the intelligent design not of a god but of a human.
Still, compared to an all-powerful deity, Homo sapiens had limited design skills. Sapiens could use selective breeding to detour around and accelerate the natural-selection processes that normally affected chickens, but they could not introduce completely new characteristics that were absent from the genetic pool of wild chickens. In a way, the relationship between Homo sapiens and chickens was similar to many other symbiotic relationships that have so often arisen on their own in nature. Sapiens exerted peculiar selective pressures on chickens that caused the fat and slow ones to proliferate, just as pollinating bees select flowers, causing the bright colourful ones to proliferate.
Today, the 4-billion-year-old regime of natural selection is facing a completely different challenge. In laboratories throughout the world, scientists are engineering living beings. They break the laws of natural selection with impunity, unbridled even by an organism’s original characteristics. Eduardo Kac, a Brazilian bio-artist, decided in 2000 to create a new work of art: a fluorescent green rabbit. Kac contacted a French laboratory and offered it a fee to engineer a radiant bunny according to his specifications. The French scientists took a run-of-the-mill white rabbit embryo, implanted in its DNA a gene taken from a green fluorescent jellyfish, and voilà! One green fluorescent rabbit for le monsieur. Kac named the rabbit Alba.
It is impossible to explain the existence of Alba through the laws of natural selection. She is the product of intelligent design. She is also a harbinger of things to come. If the potential Alba signifies is realised in full – and if humankind doesn’t annihilate itself meanwhile – the Scientific Revolution might prove itself far greater than a mere historical revolution. It may turn out to be the most important biological revolution since the appearance of life on earth. After 4 billion years of natural selection, Alba stands at the dawn of a new cosmic era, in which life will be ruled by intelligent design. If this happens, the whole of human history up to that point might, with hindsight, be reinterpreted as a process of experimentation and apprenticeship that revolutionised the game of life. Such a process should be understood from a cosmic perspective of billions of years, rather than from a human perspective of millennia.
Biologists the world over are locked in battle with the intelligent-design movement, which opposes the teaching of Darwinian evolution in schools and claims that biological complexity proves there must be a creator who thought out all biological details in advance. The biologists are right about the past, but the proponents of intelligent design might, ironically, be right about the future.
At the time of writing, the replacement of natural selection by intelligent design could happen in any of three ways: through biological engineering, cyborg engineering (cyborgs are beings that combine organic with non-organic parts) or the engineering of inorganic life.
Biological engineering is deliberate human intervention on the biological level (e.g. implanting a gene) aimed at modifying an organism’s shape, capabilities, needs or desires, in order to realise some preconceived cultural idea, such as the artistic predilections of Eduardo Kac.
There is nothing new about biological engineering, per se. People have been using it for millennia in order to reshape themselves and other organisms. A simple example is castration. Humans have been castrating bulls for perhaps 10,000 years in order to create oxen. Oxen are less aggressive, and are thus easier to train to pull ploughs. Humans also castrated their own young males to create soprano singers with enchanting voices and eunuchs who could safely be entrusted with overseeing the sultan’s harem.
But recent advances in our understanding of how organisms work, down to the cellular and nuclear levels, have opened up previously unimaginable possibilities. For instance, we can today not merely castrate a man, but also change his sex through surgical and hormonal treatments. But that’s not all. Consider the surprise, disgust and consternation that ensued when, in 1996, the following photograph appeared in newspapers and on television:
47. A mouse on whose back scientists grew an ‘ear’ made of cattle cartilage cells. It is an eerie echo of the lion-man statue from the Stadel Cave. Thirty thousand years ago, humans were already fantasising about combining different species. Today, they can actually produce such chimeras.
{Photo and © Charles Vacanti.}
No, Photoshop was not involved. It’s an untouched photo of a real mouse on whose back scientists implanted cattle cartilage cells. The scientists were able to control the growth of the new tissue, shaping it in this case into something that looks like a human ear. The process may soon enable scientists to manufacture artificial ears, which could then be implanted in humans.1
Even more remarkable wonders can be performed with genetic engineering, which is why it raises a host of ethical, political and ideological issues. And it’s not just pious monotheists who object that man should not usurp God’s role. Many confirmed atheists are no less shocked by the idea that scientists are stepping into nature’s shoes. Animal-rights activists decry the suffering caused to lab animals in genetic engineering experiments, and to the farmyard animals that are engineered in complete disregard of their needs and desires. Human-rights activists are afraid that genetic engineering might be used to create supermen who will make serfs of the rest of us. Jeremiahs offer apocalyptic visions of bio-dictatorships that will clone fearless soldiers and obedient workers. The prevailing feeling is that too many opportunities are opening too quickly and that our ability to modify genes is outpacing our capacity for making wise and farsighted use of the skill.
The result is that we’re at present using only a fraction of the potential of genetic engineering. Most of the organisms now being engineered are those with the weakest political lobbies – plants, fungi, bacteria and insects. For example, lines of E. coli, a bacterium that lives symbiotically in the human gut (and which makes headlines when it gets out of the gut and causes deadly infections), have been genetically engineered to produce biofuel.2 E. coli and several species of fungi have also been engineered to produce insulin, thereby lowering the cost of diabetes treatment.3 A gene extracted from an Arctic fish has been inserted into potatoes, making the plants more frost-resistant.4
A few mammals have also been subject to genetic engineering. Every year the dairy industry suffers billions of dollars in damages due to mastitis, a disease that strikes dairy-cow udders. Scientists are currently experimenting with genetically engineered cows whose milk contains lysostaphin, a biochemical that attacks the bacteria responsible for the disease.5 The pork industry, which has suffered from falling sales because consumers are wary of the unhealthy fats in ham and bacon, has hopes for a still-experimental line of pigs implanted with genetic material from a worm. The new genes cause the pigs to turn bad omega 6 fatty acid into its healthy cousin, omega 3.6
The next generation of genetic engineering will make pigs with good fat look like child’s play. Geneticists have managed not merely to extend sixfold the average life expectancy of worms, but also to engineer genius mice that display much-improved memory and learning skills.7 Voles are small, stout rodents resembling mice, and most varieties of voles are promiscuous. But there is one species in which boy and girl voles form lasting and monogamous relationships. Geneticists claim to have isolated the genes responsible for vole monogamy. If the addition of a gene can turn a vole Don Juan into a loyal and loving husband, are we far off from being able to genetically engineer not only the individual abilities of rodents (and humans), but also their social structures?8
But geneticists do not only want to transform living lineages. They aim to revive extinct creatures as well. And not just dinosaurs, as in the Hollywood blockbuster Jurassic Park. A team of Russian, Japanese and Korean scientists has recently mapped the genome of ancient mammoths, found frozen in the Siberian ice. They now plan to take a fertilised egg-cell of a present-day elephant, replace the elephantine DNA with reconstructed mammoth DNA, and implant the egg in the womb of an elephant. After about twenty-two months, they expect the first mammoth in 5,000 years to be born.9
But why stop at mammoths? Professor George Church of Harvard University recently suggested that, with the completion of the Neanderthal Genome Project, we can now implant reconstructed Neanderthal DNA into a Sapiens ovum, thus producing the first Neanderthal child in 30,000 years. Church claimed that he could do the job for a paltry $30 million. Several women have already volunteered to serve as surrogate mothers.10
What do we need Neanderthals for? Some argue that if we could study live Neanderthals, we could answer some of the most nagging questions about the origins and uniqueness of Homo sapiens. By comparing a Neanderthal to a Homo sapiens brain, and mapping out where their structures differ, perhaps we could identify what biological change produced consciousness as we experience it. There’s an ethical reason, too – some have argued that if Homo sapiens was responsible for the extinction of the Neanderthals, it has a moral duty to resurrect them. And having some Neanderthals around might be useful. Lots of industrialists would be glad to pay one Neanderthal to do the menial work of two Sapiens.
But why stop even at Neanderthals? Why not go back to God’s drawing board and design a better Sapiens? The abilities, needs and desires of Homo sapiens have a genetic basis, and the Sapiens genome is no more complex than that of voles and mice. (The mouse genome contains about 2.5 billion nucleobases, the Sapiens genome about 2.9 billion bases – meaning the latter is only about 16 per cent larger.)11 In the medium range – perhaps in a few decades – genetic engineering and other forms of biological engineering might enable us to make far-reaching alterations not only to our physiology, immune system and life expectancy, but also to our intellectual and emotional capacities. If genetic engineering can create genius mice, why not genius humans? If it can create monogamous voles, why not humans hard-wired to remain faithful to their partners?
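As a quick check of the genome comparison in the parenthesis above, here is a minimal Python sketch. It is purely illustrative; the rounded base counts are simply the figures quoted in the text, not precise measurements.

```python
# Rough genome-size comparison using the rounded figures quoted in the text.
mouse_bases = 2.5e9   # ~2.5 billion nucleobases (mouse)
human_bases = 2.9e9   # ~2.9 billion nucleobases (Homo sapiens)

difference = (human_bases / mouse_bases - 1) * 100
print(f"The Sapiens genome is about {difference:.0f}% larger than the mouse genome.")
# Prints: The Sapiens genome is about 16% larger than the mouse genome.
```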
The Cognitive Revolution that turned Homo sapiens from an insignificant ape into the master of the world did not require any noticeable change in physiology or even in the size and external shape of the Sapiens brain. It apparently involved no more than a few small changes to internal brain structure. Perhaps another small change would be enough to ignite a Second Cognitive Revolution, create a completely new type of consciousness, and transform Homo sapiens into something altogether different.
True, we still don’t have the acumen to achieve this, but there seems to be no insurmountable technical barrier preventing us from producing superhumans. The main obstacles are the ethical and political objections that have slowed down research on humans. And no matter how convincing the ethical arguments may be, it is hard to see how they can hold back the next step for long, especially if what is at stake is the possibility of prolonging human life indefinitely, conquering incurable diseases, and upgrading our cognitive and emotional abilities.
What would happen, for example, if we developed a cure for Alzheimer’s disease that, as a side benefit, could dramatically improve the memories of healthy people? Would anyone be able to halt the relevant research? And when the cure is developed, could any law enforcement agency limit it to Alzheimer’s patients and prevent healthy people from using it to acquire super-memories?
It’s unclear whether bioengineering could really resurrect the Neanderthals, but it would very likely bring down the curtain on Homo sapiens. Tinkering with our genes won’t necessarily kill us. But we might fiddle with Homo sapiens to such an extent that we would no longer be Homo sapiens.
There is another new technology which could change the laws of life: cyborg engineering. Cyborgs are beings which combine organic and inorganic parts, such as a human with bionic hands. In a sense, nearly all of us are bionic these days, since our natural senses and functions are supplemented by devices such as eyeglasses, pacemakers, orthotics, and even computers and mobile phones (which relieve our brains of some of their data storage and processing burdens). We stand poised on the brink of becoming true cyborgs, of having inorganic features that are inseparable from our bodies, features that modify our abilities, desires, personalities and identities.
The Defense Advanced Research Projects Agency (DARPA), a US military research agency, is developing cyborgs out of insects. The idea is to implant electronic chips, detectors and processors in the body of a fly or cockroach, which will enable either a human or an automatic operator to control the insect’s movements remotely and to absorb and transmit information. Such a fly could sit on the wall at enemy headquarters, eavesdrop on the most secret conversations and, if it isn’t caught first by a spider, inform us exactly what the enemy is planning.12 In 2006 the US Naval Undersea Warfare Center reported its intention to develop cyborg sharks, declaring, ‘NUWC is developing a fish tag whose goal is behaviour control of host animals via neural implants.’ The developers hope to identify underwater electromagnetic fields made by submarines and mines, by exploiting the natural magnetic detecting capabilities of sharks, which are superior to those of any man-made detectors.13
Sapiens, too, are being turned into cyborgs. The newest generation of hearing aids is sometimes referred to as ‘bionic ears’. The device consists of an implant that absorbs sound through a microphone located in the outer part of the ear. The implant filters the sounds, identifies human voices, and translates them into electric signals that are sent directly to the central auditory nerve and from there to the brain.14
Retina Implant, a government-sponsored German company, is developing a retinal prosthesis that may allow blind people to gain partial vision. It involves implanting a small microchip inside the patient’s eye. Photocells absorb light falling on the eye and transform it into electrical energy, which stimulates the intact nerve cells in the retina. The nervous impulses from these cells stimulate the brain, where they are translated into sight. At present the technology allows patients to orientate themselves in space, identify letters, and even recognise faces.15
Jesse Sullivan, an American electrician, lost both arms up to the shoulder in a 2001 accident. Today he uses two bionic arms, courtesy of the Rehabilitation Institute of Chicago. The special feature of Jesse’s new arms is that they are operated by thought alone. Neural signals arriving from Jesse’s brain are translated by micro-computers into electrical commands, and the arms move. When Jesse wants to raise his arm, he does what any normal person unconsciously does – and the arm rises. These arms can perform a much more limited range of movements than organic arms, but they enable Jesse to carry out simple daily functions. Claudia Mitchell, an American soldier who lost her arm in a motorcycle accident, has recently been fitted with a similar bionic arm. Scientists believe that we will soon have bionic arms that will not only move when willed to move, but will also be able to transmit signals back to the brain, thereby enabling amputees to regain even the sensation of touch!16
48. Jesse Sullivan and Claudia Mitchell holding hands. The amazing thing about their bionic arms is that they are operated by thought.
{© ImageBank/Getty Images Israel.}
At present these bionic arms are a poor replacement for our organic originals, but they have the potential for unlimited development. Bionic arms, for example, can be made far more powerful than their organic kin, making even a boxing champion feel like a weakling. Moreover, bionic arms have the advantage that they can be replaced every few years, or detached from the body and operated at a distance.
Scientists at Duke University in North Carolina have recently demonstrated this with rhesus monkeys whose brains have been implanted with electrodes. The electrodes gather signals from the brain and transmit them to external devices. The monkeys have been trained to control detached bionic arms and legs through thought alone. One monkey, named Aurora, learned to thought-control a detached bionic arm while simultaneously moving her two organic arms. Like some Hindu goddess, Aurora now has three arms, and her arms can be located in different rooms – or even cities. She can sit in her North Carolina lab, scratch her back with one hand, scratch her head with a second hand, and simultaneously steal a banana in New York (although the ability to eat a purloined fruit at a distance remains a dream). Another rhesus monkey, Idoya, won world fame in 2008 when she thought-controlled a pair of bionic legs in Kyoto, Japan, from her North Carolina chair. The legs were twenty times Idoya’s weight.17
Locked-in syndrome is a condition in which a person loses all or nearly all her ability to move any part of her body, while her cognitive abilities remain intact. Patients suffering from the syndrome have up till now been able to communicate with the outside world only through small eye movements. However, a few patients have had brain-signal-gathering electrodes implanted in their brains. Efforts are being made to translate such signals not merely into movements but also into words. If the experiments succeed, locked-in patients could finally speak directly with the outside world, and we might eventually be able to use the technology to read other people’s minds.18
Yet of all the projects currently under development, the most revolutionary is the attempt to devise a direct two-way brain-computer interface that will allow computers to read the electrical signals of a human brain, simultaneously transmitting signals that the brain can read in turn. What if such interfaces are used to directly link a brain to the Internet, or to directly link several brains to each other, thereby creating a sort of Inter-brain-net? What might happen to human memory, human consciousness and human identity if the brain has direct access to a collective memory bank? In such a situation, one cyborg could, for example, retrieve the memories of another – not hear about them, not read about them in an autobiography, not imagine them, but directly remember them as if they were his own. Or her own. What happens to concepts such as the self and gender identity when minds become collective? How could you know thyself or follow your dream if the dream is not in your mind but in some collective reservoir of aspirations?
Such a cyborg would no longer be human, or even organic. It would be something completely different. It would be so fundamentally another kind of being that we cannot even grasp the philosophical, psychological or political implications.
The third way to change the laws of life is to engineer completely inorganic beings. The most obvious examples are computer programs and computer viruses that can undergo independent evolution.
The field of genetic programming is today one of the most interesting areas of computer science. It tries to emulate the methods of genetic evolution. Many programmers dream of creating a program that could learn and evolve completely independently of its creator. In this case, the programmer would be a primum mobile, a first mover, but his creation would be free to evolve in directions neither its maker nor any other human could ever have envisaged.
A prototype for such a program already exists – it’s called a computer virus. As it spreads through the Internet, the virus replicates itself millions upon millions of times, all the while being chased by predatory antivirus programs and competing with other viruses for a place in cyberspace. One day, as the virus replicates itself, a mistake occurs – a computerised mutation. Perhaps the mutation occurs because the human engineer programmed the virus to make occasional random replication errors; perhaps it is due to a random fault. If, by chance, the modified virus is better at evading antivirus programs without losing its ability to invade other computers, it will spread through cyberspace, and the mutants will survive and reproduce. As time goes by, cyberspace will fill with new viruses that nobody engineered, and that undergo non-organic evolution.
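To make the evolutionary logic of this passage concrete, here is a minimal sketch in Python. It is an illustration only, not anything described in the book or drawn from any real virus: each toy ‘virus’ is just a bit string, copying errors stand in for computerised mutations, and a simple culling step stands in for antivirus software. All names and numbers are invented for the example.

```python
import random

# Illustrative only: a toy mutate-and-select loop, not real malware.
GENOME_LENGTH = 20     # length of each toy 'virus' (a bit string)
POPULATION_SIZE = 50   # how many viruses survive each generation
MUTATION_RATE = 0.02   # chance that any single bit flips during replication
GENERATIONS = 100

def random_virus():
    return [random.randint(0, 1) for _ in range(GENOME_LENGTH)]

def fitness(virus):
    # Toy stand-in for 'evades antivirus without losing infectivity':
    # here, simply the number of 1-bits in the string.
    return sum(virus)

def replicate(virus):
    # Copying is imperfect: each bit may flip with a small probability,
    # mirroring the 'computerised mutation' described in the text.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in virus]

population = [random_virus() for _ in range(POPULATION_SIZE)]

for generation in range(GENERATIONS):
    # Replication with occasional mutation: each virus makes two copies.
    offspring = [replicate(v) for v in population for _ in range(2)]
    # Selection: the antivirus 'predator' culls the least fit copies.
    offspring.sort(key=fitness, reverse=True)
    population = offspring[:POPULATION_SIZE]

best = max(population, key=fitness)
print(f"After {GENERATIONS} generations, best fitness: {fitness(best)}/{GENOME_LENGTH}")
```

Run for enough generations, the surviving bit strings drift towards ever-higher fitness even though no one designed any particular variant, which is the same blind logic the paragraph above describes.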
Are these living creatures? It depends on what you mean by ‘living creatures’. They have certainly been produced by a new evolutionary process, completely independent of the laws and limitations of organic evolution.
Imagine another possibility – suppose you could back up your brain to a portable hard drive and then run it on your laptop. Would your laptop be able to think and feel just like a Sapiens? If so, would it be you or someone else? What if computer programmers could create an entirely new but digital mind, composed of computer code, complete with a sense of self, consciousness and memory? If you ran the program on your computer, would it be a person? If you deleted it could you be charged with murder?
We might soon have the answer to such questions. The Human Brain Project, founded in 2005, hopes to recreate a complete human brain inside a computer, with electronic circuits in the computer emulating neural networks in the brain. The project’s director has claimed that, if funded properly, within a decade or two we could have an artificial human brain inside a computer that could talk and behave very much as a human does. If successful, that would mean that after 4 billion years of milling around inside the small world of organic compounds, life will suddenly break out into the vastness of the inorganic realm, ready to take up shapes beyond our wildest dreams. Not all scholars agree that the mind works in a manner analogous to today’s digital computers – and if it doesn’t, present-day computers would not be able to simulate it. Yet it would be foolish to categorically dismiss the possibility before giving it a try. In 2013 the project received a grant of €1 billion from the European Union.19
Presently, only a tiny fraction of these new opportunities have been realised. Yet the world of 2014 is already a world in which culture is releasing itself from the shackles of biology. Our ability to engineer not merely the world around us, but above all the world inside our bodies and minds, is developing at breakneck speed. More and more spheres of activity are being shaken out of their complacent ways. Lawyers need to rethink issues of privacy and identity; governments are faced with rethinking matters of health care and equality; sports associations and educational institutions need to redefine fair play and achievement; pension funds and labour markets should readjust to a world in which sixty might be the new thirty. They must all deal with the conundrums of bioengineering, cyborgs and inorganic life.
Mapping the first human genome required fifteen years and $3 billion. Today you can map a person’s DNA within a few weeks and at the cost of a few hundred dollars.20 The era of personalised medicine – medicine that matches treatment to DNA – has begun. The family doctor could soon tell you with greater certainty that you face high risks of liver cancer, whereas you needn’t worry too much about heart attacks. She could determine that a popular medication that helps 92 per cent of people is useless to you, and you should instead take another pill, fatal to many people but just right for you. The road to near-perfect medicine stands before us.
However, with improvements in medical knowledge will come new ethical conundrums. Ethicists and legal experts are already wrestling with the thorny issue of privacy as it relates to DNA. Would insurance companies be entitled to ask for our DNA scans and to raise premiums if they discover a genetic tendency to reckless behaviour? Would we be required to fax our DNA, rather than our CV, to potential employers? Could an employer favour a candidate because his DNA looks better? Or could we sue in such cases for ‘genetic discrimination’? Could a company that develops a new creature or a new organ register a patent on its DNA sequences? It is obvious that one can own a particular chicken, but can one own an entire species?
Such dilemmas are dwarfed by the ethical, social and political implications of the Gilgamesh Project and of our potential new abilities to create superhumans. The Universal Declaration of Human Rights, government medical programmes throughout the world, national health insurance programmes and national constitutions worldwide recognise that a humane society ought to give all its members fair medical treatment and keep them in relatively good health. That was all well and good as long as medicine was chiefly concerned with preventing illness and healing the sick. What might happen once medicine becomes preoccupied with enhancing human abilities? Would all humans be entitled to such enhanced abilities, or would there be a new superhuman elite?
Our late modern world prides itself on recognising, for the first time in history, the basic equality of all humans, yet it might be poised to create the most unequal of all societies. Throughout history, the upper classes always claimed to be smarter, stronger and generally better than the underclass. They were usually deluding themselves. A baby born to a poor peasant family was likely to be as intelligent as the crown prince. With the help of new medical capabilities, the pretensions of the upper classes might soon become an objective reality.
This is not science fiction. Most science-fiction plots describe a world in which Sapiens – identical to us – enjoy superior technology such as light-speed spaceships and laser guns. The ethical and political dilemmas central to these plots are taken from our own world, and they merely recreate our emotional and social tensions against a futuristic backdrop. Yet the real potential of future technologies is to change Homo sapiens itself, including our emotions and desires, and not merely our vehicles and weapons. What is a spaceship compared to an eternally young cyborg who does not breed and has no sexuality, who can share thoughts directly with other beings, whose abilities to focus and remember are a thousand times greater than our own, and who is never angry or sad, but has emotions and desires that we cannot begin to imagine?
Science fiction rarely describes such a future, because an accurate description is by definition incomprehensible. Producing a film about the life of some super-cyborg is akin to producing Hamlet for an audience of Neanderthals. Indeed, the future masters of the world will probably be more different from us than we are from Neanderthals. Whereas we and the Neanderthals are at least human, our inheritors will be godlike.
Physicists define the Big Bang as a singularity. It is a point at which all the known laws of nature did not exist. Time too did not exist. It is thus meaningless to say that anything existed ‘before’ the Big Bang. We may be fast approaching a new singularity, when all the concepts that give meaning to our world – me, you, men, women, love and hate – will become irrelevant. Anything happening beyond that point is meaningless to us.
In 1818 Mary Shelley published Frankenstein, the story of a scientist who tries to create a superior being and instead creates a monster. In the last two centuries, this story has been told over and over again in countless variations. It has become a central pillar of our new scientific mythology. At first sight, the Frankenstein story appears to warn us that if we try to play God and engineer life we will be punished severely. Yet the story has a deeper meaning.
The Frankenstein myth confronts Homo sapiens with the fact that the last days are fast approaching. Unless some nuclear or ecological catastrophe intervenes, so goes the story, the pace of technological development will soon lead to the replacement of Homo sapiens by completely different beings who possess not only different physiques, but also very different cognitive and emotional worlds. This is something most Sapiens find extremely disconcerting. We like to believe that in the future people just like us will travel from planet to planet in fast spaceships. We don’t like to contemplate the possibility that in the future, beings with emotions and identities like ours will no longer exist, and our place will be taken by alien life forms whose abilities dwarf our own.
We seek comfort in the fantasy that Dr Frankenstein can create only terrible monsters, whom we would have to destroy in order to save the world. We like to tell the story that way because it implies that we are the best of all beings, that there never was and never will be something better than us. Any attempt to improve us will inevitably fail, because even if our bodies could be improved, the human spirit cannot be touched.
We would have a hard time swallowing the fact that scientists could engineer spirits as well as bodies, and that future Dr Frankensteins could therefore create something truly superior to us, something that will look at us as condescendingly as we look at the Neanderthals.
We cannot be certain whether today’s Frankensteins will indeed fulfil this prophecy. The future is unknown, and it would be surprising if the forecasts of the last few pages were realised in full. History teaches us that what seems to be just around the corner may never materialise due to unforeseen barriers, and that other unimagined scenarios will in fact come to pass. When the nuclear age erupted in the 1940s, many forecasts were made about the future nuclear world of the year 2000. When Sputnik and Apollo 11 fired the imagination of the world, everyone began predicting that by the end of the century, people would be living in space colonies on Mars and Pluto. Few of these forecasts came true. On the other hand, nobody foresaw the Internet.
So don’t go out just yet to buy liability insurance to indemnify you against lawsuits filed by digital beings. The above fantasies – or nightmares – are just stimulants for your imagination. What we should take seriously is the idea that the next stage of history will include not only technological and organisational transformations, but also fundamental transformations in human consciousness and identity. And these could be transformations so fundamental that they will call the very term ‘human’ into question. How long do we have? No one really knows. As already mentioned, some say that by 2050 a few humans will already be a-mortal. Less radical forecasts speak of the next century, or the next millennium. Yet from the perspective of 70,000 years of Sapiens history, what are a few millennia?
If the curtain is indeed about to drop on Sapiens history, we members of one of its final generations should devote some time to answering one last question: what do we want to become? This question, sometimes known as the Human Enhancement question, dwarfs the debates that currently preoccupy politicians, philosophers, scholars and ordinary people. After all, the debates between today’s religions, ideologies, nations and classes will in all likelihood disappear along with Homo sapiens. If our successors indeed function on a different level of consciousness (or perhaps possess something beyond consciousness that we cannot even conceive), it seems doubtful that Christianity or Islam will be of interest to them, that their social organisation could be Communist or capitalist, or that their genders could be male or female.
And yet the great debates of history are important because at least the first generation of these gods would be shaped by the cultural ideas of their human designers. Would they be created in the image of capitalism, of Islam, or of feminism? The answer to this question might send them careening in entirely different directions.
Most people prefer not to think about it. Even the field of bioethics prefers to address another question, ‘What is it forbidden to do?’ Is it acceptable to carry out genetic experiments on living human beings? On aborted fetuses? On stem cells? Is it ethical to clone sheep? And chimpanzees? And what about humans? All of these are important questions, but it is naïve to imagine that we might simply hit the brakes and stop the scientific projects that are upgrading Homo sapiens into a different kind of being. For these projects are inextricably meshed together with the Gilgamesh Project. Ask scientists why they study the genome, or try to connect a brain to a computer, or try to create a mind inside a computer. Nine out of ten times you’ll get the same standard answer: we are doing it to cure diseases and save human lives. Even though the implications of creating a mind inside a computer are far more dramatic than curing psychiatric illnesses, this is the standard justification given, because nobody can argue with it. This is why the Gilgamesh Project is the flagship of science. It serves to justify everything science does. Dr Frankenstein piggybacks on the shoulders of Gilgamesh. Since it is impossible to stop Gilgamesh, it is also impossible to stop Dr Frankenstein.
The only thing we can try to do is to influence the direction scientists are taking. But since we might soon be able to engineer our desires too, the real question facing us is not ‘What do we want to become?’, but ‘What do we want to want?’ Those who are not spooked by this question probably haven’t given it enough thought.
SEVENTY THOUSAND YEARS AGO, HOMO sapiens was still an insignificant animal minding its own business in a corner of Africa. In the following millennia it transformed itself into the master of the entire planet and the terror of the ecosystem. Today it stands on the verge of becoming a god, poised to acquire not only eternal youth, but also the divine abilities of creation and destruction.
Unfortunately, the Sapiens regime on earth has so far produced little that we can be proud of. We have mastered our surroundings, increased food production, built cities, established empires and created far-flung trade networks. But did we decrease the amount of suffering in the world? Time and again, massive increases in human power did not necessarily improve the well-being of individual Sapiens, and usually caused immense misery to other animals.
In the last few decades we have at last made some real progress as far as the human condition is concerned, with the reduction of famine, plague and war. Yet the situation of other animals is deteriorating more rapidly than ever before, and the improvement in the lot of humanity is too recent and fragile to be certain of.
Moreover, despite the astonishing things that humans are capable of doing, we remain unsure of our goals and we seem to be as discontented as ever. We have advanced from canoes to galleys to steamships to space shuttles – but nobody knows where we’re going. We are more powerful than ever before, but have very little idea what to do with all that power. Worse still, humans seem to be more irresponsible than ever. Self-made gods with only the laws of physics to keep us company, we are accountable to no one. We are consequently wreaking havoc on our fellow animals and on the surrounding ecosystem, seeking little more than our own comfort and amusement, yet never finding satisfaction.
Is there anything more dangerous than dissatisfied and irresponsible gods who don’t know what they want?
1 An Animal of No Significance
1. Ann Gibbons, ‘Food for Thought: Did the First Cooked Meals Help Fuel the Dramatic Evolutionary Expansion of the Human Brain?’, Science 316:5831 (2007), 1,558–60.
2 The Tree of Knowledge
1. Robin Dunbar, Grooming, Gossip and the Evolution of Language (Cambridge, Mass.: Harvard University Press, 1998).
2. Frans de Waal, Chimpanzee Politics: Power and Sex among Apes (Baltimore: Johns Hopkins University Press, 2000); Frans de Waal, Our Inner Ape: A Leading Primatologist Explains Why We Are Who We Are (New York: Riverhead Books, 2005); Michael L. Wilson and Richard W. Wrangham, ‘Intergroup Relations in Chimpanzees’, Annual Review of Anthropology 32 (2003), 363–92; M. McFarland Symington, ‘Fission-Fusion Social Organization in Ateles and Pan’, International Journal of Primatology 11:1 (1990), 49; Colin A. Chapman and Lauren J. Chapman, ‘Determinants of Group Size in Primates: The Importance of Travel Costs’, in On the Move: How and Why Animals Travel in Groups, ed. Sue Boinsky and Paul A. Garber (Chicago: University of Chicago Press, 2000), 26.
3. Dunbar, Grooming, Gossip and the Evolution of Language, 69–79; Leslie C. Aiello and R. I. M. Dunbar, ‘Neocortex Size, Group Size, and the Evolution of Language’, Current Anthropology 34:2 (1993), 189. For criticism of this approach see: Christopher McCarthy et al., ‘Comparing Two Methods for Estimating Network Size’, Human Organization 60:1 (2001), 32; R. A. Hill and R. I. M. Dunbar, ‘Social Network Size in Humans’, Human Nature 14:1 (2003), 65.
4. Yvette Taborin, ‘Shells of the French Aurignacian and Perigordian’, in Before Lascaux: The Complete Record of the Early Upper Paleolithic, ed. Heidi Knecht, Anne Pike-Tay and Randall White (Boca Raton: CRC Press, 1993), 211–28.
5. G. R. Summerhayes, ‘Application of PIXE-PIGME to Archaeological Analysis of Changing Patterns of Obsidian Use in West New Britain, Papua New Guinea’, in Archaeological Obsidian Studies: Method and Theory, ed. Steven M. Shackley (New York: Plenum Press, 1998), 129–58.
3 A Day in the Life of Adam and Eve
1. Christopher Ryan and Cacilda Jethá, Sex at Dawn: The Prehistoric Origins of Modern Sexuality (New York: Harper, 2010); S. Beckerman and P. Valentine (eds.), Cultures of Multiple Fathers. The Theory and Practice of Partible Paternity in Lowland South America (Gainesville: University Press of Florida, 2002).
2. Noel G. Butlin, Economics and the Dreamtime: A Hypothetical History (Cambridge: Cambridge University Press, 1993), 98–101; Richard Broome, Aboriginal Australians (Sydney: Allen & Unwin, 2002), 15; William Howell Edwards, An Introduction to Aboriginal Societies (Wentworth Falls, NSW: Social Science Press, 1988), 52.
3. Fekri A. Hassan, Demographic Archaeology (New York: Academic Press, 1981), 196–9; Lewis Robert Binford, Constructing Frames of Reference: An Analytical Method for Archaeological Theory Building Using Hunter-Gatherer and Environmental Data Sets (Berkeley: University of California Press, 2001), 143.
4. Brian Hare, The Genius of Dogs: How Dogs Are Smarter Than You Think (Dutton: Penguin Group, 2013).
5. Christopher B. Ruff, Erik Trinkaus and Trenton W. Holliday, ‘Body Mass and Encephalization in Pleistocene Homo’, Nature 387 (1997), 173–6; M. Henneberg and M. Steyn, ‘Trends in Cranial Capacity and Cranial Index in Subsaharan Africa During the Holocene’, American Journal of Human Biology 5:4 (1993): 473–9; Drew H. Bailey and David C. Geary, ‘Hominid Brain Evolution: Testing Climatic, Ecological and Social Competition Models’, Human Nature 20 (2009): 67–79; Daniel J. Wescott and Richard L. Jantz, ‘Assessing Craniofacial Secular Change in American Blacks and Whites Using Geometric Morphometry’, in Modern Morphometrics in Physical Anthropology: Developments in Primatology: Progress and Prospects, ed. Dennis E. Slice (New York: Plenum Publishers, 2005), 231–45.
6. Nicholas G. Blurton Jones et al., ‘Antiquity of Postreproductive Life: Are There Modern Impacts on Hunter-Gatherer Postreproductive Life Spans?’, American Journal of Human Biology 14 (2002), 184–205.
7. Kim Hill and A. Magdalena Hurtado, Aché Life History: The Ecology and Demography of a Foraging People (New York: Aldine de Gruyter, 1996), 164, 236.
8. Ibid., 78.
9. Vincenzo Formicola and Alexandra P. Buzhilova, ‘Double Child Burial from Sunghir (Russia): Pathology and Inferences for Upper Paleolithic Funerary Practices’, American Journal of Physical Anthropology 124:3 (2004), 189–98; Giacomo Giacobini, ‘Richness and Diversity of Burial Rituals in the Upper Paleolithic’, Diogenes 54:2 (2007), 19–39.
10. I. J. N. Thorpe, ‘Anthropology, Archaeology and the Origin of Warfare’, World Archaeology 35:1 (2003), 145–65; Raymond C. Kelly, Warless Societies and the Origin of War (Ann Arbor: University of Michigan Press, 2000); Azar Gat, War in Human Civilization (Oxford: Oxford University Press, 2006); Lawrence H. Keeley, War before Civilization: The Myth of the Peaceful Savage (Oxford: Oxford University Press, 1996); Slavomil Vencl, ‘Stone Age Warfare’, in Ancient Warfare: Archaeological Perspectives, ed. John Carman and Anthony Harding (Stroud: Sutton Publishing, 1999), 57–73.
4 The Flood
1. James F. O’Connel and Jim Allen, ‘Pre-LGM Sahul (Pleistocene Australia – New Guinea) and the Archaeology of Early Modern Humans’, in Rethinking the Human Revolution: New Behavioural and Biological Perspectives on the Origin and Dispersal of Modern Humans, ed. Paul Mellars, Ofer Bar-Yosef, Katie Boyle (Cambridge: McDonald Institute for Archaeological Research, 2007), 395–410; James F. O’Connel and Jim Allen, ‘When Did Humans First Arrive in Greater Australia and Why is it Important to Know?’, Evolutionary Anthropology 6:4 (1998), 132–46; James F. O’Connel and Jim Allen, ‘Dating the Colonization of Sahul (Pleistocene Australia – New Guinea): A Review of Recent Research’, Journal of Archaeological Science 31:6 (2004), 835–53; Jon M. Erlandson, ‘Anatomically Modern Humans, Maritime Voyaging and the Pleistocene Colonization of the Americas’, in The First Americans: The Pleistocene Colonization of the New World, ed. Nina G. Jablonski (San Francisco: University of California Press, 2002), 59–60, 63–4; Jon M. Erlandson and Torben C. Rick, ‘Archaeology Meets Marine Ecology: The Antiquity of Maritime Cultures and Human Impacts on Marine Fisheries and Ecosystems’, Annual Review of Marine Science 2 (2010), 231–51; Atholl Anderson, ‘Slow Boats from China: Issues in the Prehistory of Indo-China Seafaring’, Modern Quaternary Research in Southeast Asia 16 (2000), 13–50; Robert G. Bednarik, ‘Maritime Navigation in the Lower and Middle Paleolithic’, Earth and Planetary Sciences 328 (1999), 559–60; Robert G. Bednarik, ‘Seafaring in the Pleistocene’, Cambridge Archaeological Journal 13:1 (2003), 41–66.
2. Timothy F. Flannery, The Future Eaters: An Ecological History of the Australasian Lands and Peoples (Port Melbourne: Reed Books Australia, 1994); Anthony D. Barnosky et al., ‘Assessing the Causes of Late Pleistocene Extinctions on the Continents’, Science 306:5693 (2004): 70–5; Barry W. Brook and David M. J. S. Bowman, ‘The Uncertain Blitzkrieg of Pleistocene Megafauna’, Journal of Biogeography 31:4 (2004), 517–23; Gifford H. Miller et al., ‘Ecosystem Collapse in Pleistocene Australia and a Human Role in Megafaunal Extinction’, Science 309:5732 (2005), 287–90; Richard G. Roberts et al., ‘New Ages for the Last Australian Megafauna: Continent Wide Extinction about 46,000 Years Ago’, Science 292:5523 (2001), 1,888–92.
3. Stephen Wroe and Judith Field, ‘A Review of Evidence for a Human Role in the Extinction of Australian Megafauna and an Alternative Explanation’, Quaternary Science Reviews 25:21–2 (2006), 2,692–703; Barry W. Brook et al., ‘Would the Australian Megafauna Have Become Extinct if Humans Had Never Colonised the Continent? Comments on ‘‘A Review of the Evidence for a Human Role in the Extinction of Australian Megafauna and an Alternative Explanation” by S. Wroe and J. Field’, Quaternary Science Reviews 26:3–4 (2007), 560–4; Chris S. M. Turney et al., ‘Late-Surviving Megafauna in Tasmania, Australia, Implicate Human Involvement in their Extinction’, Proceedings of the National Academy of Sciences 105:34 (2008), 12,150–3.
4. John Alroy, ‘A Multispecies Overkill Simulation of the End-Pleistocene Megafaunal Mass Extinction’, Science, 292:5523 (2001), 1,893–6; O’Connel and Allen, ‘Pre-LGM Sahul’, 400–1.
5. L. H. Keeley, ‘Proto-Agricultural Practices Among Hunter-Gatherers: A Cross-Cultural Survey’, in Last Hunters, First Farmers: New Perspectives on the Prehistoric Transition to Agriculture, ed. T. Douglas Price and Anne Birgitte Gebauer (Santa Fe: School of American Research Press, 1995), 243–72; R. Jones, ‘Firestick Farming’, Australian Natural History 16 (1969), 224–8.
6. David J. Meltzer, First Peoples in a New World: Colonizing Ice Age America (Berkeley: University of California Press, 2009).
7. Paul L. Koch and Anthony D. Barnosky, ‘Late Quaternary Extinctions: State of the Debate’, Annual Review of Ecology, Evolution, and Systematics 37 (2006), 215–50; Anthony D. Barnosky et al., ‘Assessing the Causes of Late Pleistocene Extinctions on the Continents’, 70–5.
5 History’s Biggest Fraud
1. The map is based mainly on: Peter Bellwood, First Farmers: The Origins of Agricultural Societies (Malden: Blackwell Publishing, 2005).
2. Jared Diamond, Guns, Germs, and Steel: The Fates of Human Societies (New York: W. W. Norton, 1997).
3. Gat, War in Human Civilization, 130–1; Robert S. Walker and Drew H. Bailey, ‘Body Counts in Lowland South American Violence’, Evolution and Human Behavior 34 (2013), 29–34.
4. Katherine A. Spielmann, ‘A Review: Dietary Restriction on Hunter-Gatherer Women and the Implications for Fertility and Infant Mortality’, Human Ecology 17:3 (1989), 321–45. See also: Bruce Winterhalder and Eric Alder Smith, ‘Analyzing Adaptive Strategies: Human Behavioral Ecology at Twenty-Five’, Evolutionary Anthropology 9:2 (2000), 51–72.
5. Alain Bideau, Bertrand Desjardins and Hector Perez-Brignoli (eds.), Infant and Child Mortality in the Past (Oxford: Clarendon Press, 1997); Edward Anthony Wrigley et al., English Population History from Family Reconstitution, 1580–1837 (Cambridge: Cambridge University Press, 1997), 295–6, 303.
6. Manfred Heun et al., ‘Site of Einkorn Wheat Domestication Identified by DNA Fingerprints’, Science 278:5341 (1997), 1,312–14.
7. Charles Patterson, Eternal Treblinka: Our Treatment of Animals and the Holocaust (New York: Lantern Books, 2002), 9–10; Peter J. Ucko and G. W. Dimbleby (eds.), The Domestication and Exploitation of Plants and Animals (London: Duckworth, 1969), 259.
8. Avi Pinkas (ed.), Farmyard Animals in Israel – Research, Humanism and Activity (Rishon Le-Ziyyon: The Association for Farmyard Animals, 2009 [Hebrew]), 169–99; ‘Milk Production – the Cow’ [Hebrew], The Dairy Council, accessed 22 March 2012, http://www.milk.org.il/cgi-webaxy/sal/sal.pl?lang=he&ID=645657_milk&act=show&dbid=katavot&dataid=cow.htm.
9. Edward Evan Evans-Pritchard, The Nuer: A Description of the Modes of Livelihood and Political Institutions of a Nilotic People (Oxford: Oxford University Press, 1969); E. C. Amoroso and P. A. Jewell, ‘The Exploitation of the Milk-Ejection Reflex by Primitive People’, in Man and Cattle: Proceedings of the Symposium on Domestication at the Royal Anthropological Institute, 24–26 May 1960, ed. A. E. Mourant and F. E. Zeuner (London: The Royal Anthropological Institute, 1963), 129–34.
10. Johannes Nicolaisen, Ecology and Culture of the Pastoral Tuareg (Copenhagen: National Museum, 1963), 63.
6 Building Pyramids
1. Angus Maddison, The World Economy, vol. 2 (Paris: Development Centre of the Organization of Economic Co-operation and Development, 2006), 636; ‘Historical Estimates of World Population’, U.S. Census Bureau, accessed 10 December 2010, http://www.census.gov/ipc/www/worldhis.html.
2. Robert B. Mark, The Origins of the Modern World: A Global and Ecological Narrative (Lanham, MD: Rowman & Littlefield Publishers, 2002), 24.
3. Raymond Westbrook, ‘Old Babylonian Period’, in A History of Ancient Near Eastern Law, vol. 1, ed. Raymond Westbrook (Leiden: Brill, 2003), 361–430; Martha T. Roth, Law Collections from Mesopotamia and Asia Minor, 2nd edn (Atlanta: Scholars Press, 1997), 71–142; M. E. J. Richardson, Hammurabi’s Laws: Text, Translation and Glossary (London: T & T Clark International, 2000).
4. Roth, Law Collections from Mesopotamia, 76.
5. Ibid., 121.
6. Ibid., 122–3.
7. Ibid., 133–3.
8. Constance Brittaine Bouchard, Strong of Body, Brave and Noble: Chivalry and Society in Medieval France (New York: Cornell University Press, 1998), 99; Mary Martin McLaughlin, ‘Survivors and Surrogates: Children and Parents from the Ninth to Thirteenth Centuries’, in Medieval Families: Perspectives on Marriage, Household and Children, ed. Carol Neel (Toronto: University of Toronto Press, 2004), 81 n.; Lise E. Hull, Britain’s Medieval Castles (Westport: Praeger, 2006), 144.
7 Memory Overload
1. Andrew Robinson, The Story of Writing (New York: Thames and Hudson, 1995), 63; Hans J. Nissen, Peter Damerow and Robert K. Englung, Archaic Bookkeeping: Writing and Techniques of Economic Administration in the Ancient Near East (Chicago, London: The University of Chicago Press, 1993), 36.
2. Marcia and Robert Ascher, Mathematics of the Incas – Code of the Quipu (New York: Dover Publications, 1981).
3. Gary Urton, Signs of the Inka Khipu (Austin: University of Texas Press, 2003); Galen Brokaw, A History of the Khipu (Cambridge: Cambridge University Press, 2010).
4. Stephen D. Houston (ed.), The First Writing: Script Invention as History and Process (Cambridge: Cambridge University Press, 2004), 222.
8 There is No Justice in History
1. Sheldon Pollock, ‘Axialism and Empire’, in Axial Civilizations and World History, ed. Johann P. Arnason, S. N. Eisenstadt and Björn Wittrock (Leiden: Brill, 2005), 397–451.
2. Harold M. Tanner, China: A History (Indianapolis: Hackett Pub. Co., 2009), 34.
3. Ramesh Chandra, Identity and Genesis of Caste System in India (Delhi: Kalpaz Publications, 2005); Michael Bamshad et al., ‘Genetic Evidence on the Origins of Indian Caste Population’, Genome Research 11 (2001): 904–1,004; Susan Bayly, Caste, Society and Politics in India from the Eighteenth Century to the Modern Age (Cambridge: Cambridge University Press, 1999).
4. Houston, First Writing, 196.
5. The secretary general, United Nations, Report of the Secretary General on the In-depth Study on All Forms of Violence Against Women, delivered to the General Assembly, UN Doc. A/61/122/Add.1 (6 July 2006), 89.
6. Sue Blundell, Women in Ancient Greece (Cambridge, Mass.: Harvard University Press, 1995), 113–29, 132–3.
10 The Scent of Money
1. Francisco López de Gómara, Historia de la Conquista de Mexico, vol. 1, ed. D. Joaquin Ramirez Cabañes (Mexico City: Editorial Pedro Robredo, 1943), 106.
2. Andrew M. Watson, ‘Back to Gold – and Silver’, Economic History Review 20:1 (1967), 11–12; Jasim Alubudi, Repertorio Bibliográfico del Islam (Madrid: Vision Libros, 2003), 194.
3. Watson, ‘Back to Gold – and Silver’, 17–18.
4. David Graeber, Debt: The First 5,000 Years (Brooklyn, NY: Melville House, 2011).
5. Glyn Davies, A History of Money: From Ancient Times to the Present Day (Cardiff: University of Wales Press, 1994), 15.
6. Szymon Laks, Music of Another World, trans. Chester A. Kisiel (Evanston, Ill.: Northwestern University Press, 1989), 88–9. The Auschwitz ‘market’ was restricted to certain classes of prisoners and conditions changed dramatically across time.
7. See also Niall Ferguson, The Ascent of Money (New York: The Penguin Press, 2008), 4.
8. For information on barley money I have relied on an unpublished PhD thesis: Refael Benvenisti, ‘Economic Institutions of Ancient Assyrian Trade in the Twentieth to Eighteenth Centuries BC’ (Hebrew University of Jerusalem, unpublished PhD thesis, 2011). See also Norman Yoffee, ‘The Economy of Ancient Western Asia’, in Civilizations of the Ancient Near East, vol. 1, ed. J. M. Sasson (New York: C. Scribner’s Sons, 1995), 1,387–99; R. K. Englund, ‘Proto-Cuneiform Account-Books and Journals’, in Creating Economic Order: Record-keeping, Standardization and the Development of Accounting in the Ancient Near East, ed. Michael Hudson and Cornelia Wunsch (Bethesda, Md.: CDL Press, 2004), 21–46; Marvin A. Powell, ‘A Contribution to the History of Money in Mesopotamia Prior to the Invention of Coinage’, in Festschrift Lubor Matouš, ed. B. Hruška and G. Komoróczy (Budapest: Eötvös Loránd Tudományegyetem, 1978), 211–43; Marvin A. Powell, ‘Money in Mesopotamia’, Journal of the Economic and Social History of the Orient 39:3 (1996), 224–42; John F. Robertson, ‘The Social and Economic Organization of Ancient Mesopotamian Temples’, in Civilizations of the Ancient Near East, vol. 1, ed. Sasson, 443–500; M. Silver, ‘Modern Ancients’, in Commerce and Monetary Systems in the Ancient World: Means of Transmission and Cultural Interaction, ed. R. Rollinger and U. Christoph (Stuttgart: Steiner, 2004), 65–87; Daniel C. Snell, ‘Methods of Exchange and Coinage in Ancient Western Asia’, in Civilizations of the Ancient Near East, vol. 1, ed. Sasson, 1,487–97.
11 Imperial Visions
1. Nahum Megged, The Aztecs (Tel Aviv: Dvir, 1999 [Hebrew]), 103.
2. Tacitus, Agricola, ch. 30 (Cambridge, Mass.: Harvard University Press, 1958), 220–1.
3. A. Fienup-Riordan, The Nelson Island Eskimo: Social Structure and Ritual Distribution (Anchorage: Alaska Pacific University Press, 1983), 10.
4. Yuri Pines, ‘Nation States, Globalization and a United Empire – the Chinese Experience (third to fifth centuries BC)’, Historia 15 (1995), 54 [Hebrew].
5. Alexander Yakobson, ‘Us and Them: Empire, Memory and Identity in Claudius’ Speech on Bringing Gauls into the Roman Senate’, in On Memory: An Interdisciplinary Approach, ed. Doron Mendels (Oxford: Peter Lang, 2007), 23–4.
12 The Law of Religion
1. W. H. C. Frend, Martyrdom and Persecution in the Early Church (Cambridge: James Clarke & Co., 2008), 536–7.
2. Robert Jean Knecht, The Rise and Fall of Renaissance France, 1483–1610 (London: Fontana Press, 1996), 424.
3. Marie Harm and Hermann Wiehle, Lebenskunde fuer Mittelschulen – Fuenfter Teil. Klasse 5 fuer Jungen (Halle: Hermann Schroedel Verlag, 1942), 152–7.
13 The Secret of Success
1. Susan Blackmore, The Meme Machine (Oxford: Oxford University Press, 1999).
14 The Discovery of Ignorance
1. David Christian, Maps of Time: An Introduction to Big History (Berkeley: University of California Press, 2004), 344–5; Angus Maddison, The World Economy, vol. 2 (Paris: Development Centre of the Organization of Economic Co-operation and Development, 2001), 636; ‘Historical Estimates of World Population’, US Census Bureau, accessed 10 December 2010, http://www.census.gov/ipc/www/worldhis.html.
2. Maddison, The World Economy, vol. 1, 261.
3. ‘Gross Domestic Product 2009’, the World Bank, Data and Statistics, accessed 10 December 2010, http://siteresources.worldbank.org/DATASTATISTICS/Resources/GDP.pdf.
4. Christian, Maps of Time, 141.
5. The largest contemporary cargo ship can carry about 100,000 tons. In 1470 all the world’s fleets could together carry no more than 320,000 tons. By 1570 total global tonnage was up to 730,000 tons (Maddison, The World Economy, vol. 1, 97).
6. The world’s largest bank – the Royal Bank of Scotland – has reported in 2007 deposits worth $1.3 trillion. That’s five times the annual global production in 1500. See ‘Annual Report and Accounts 2008’, the Royal Bank of Scotland, 35, accessed 10 December 2010, http://files.shareholder.com/downloads/RBS/626570033x0x278481/eb7a003a-5c9b-41ef-bad3-81fb98a6c823/RBS_GRA_2008_09_03_09.pdf.
7. Ferguson, Ascent of Money, 185–98.
8. Maddison, The World Economy, vol. 1, 31; Wrigley, English Population History, 295; Christian, Maps of Time, 450, 452; ‘World Health Statistic Report 2009’, 35–45, World Health Organization, accessed 10 December 2010 http://www.who.int/whosis/whostat/EN_WHS09_Full.pdf.
9. Wrigley, English Population History, 296.
10. ‘England, Interim Life Tables, 1980–82 to 2007–09’, Office for National Statistics, accessed 22 March 2012, http://www.ons.gov.uk/ons/publications/re-reference-tables.html?edition=tcm%3A77-61850.
11. Michael Prestwich, Edward I (Berkeley: University of California Press, 1988), 125–6.
12. Jennie B. Dorman et al., ‘The age-1 and daf-2 Genes Function in a Common Pathway to Control the Lifespan of Caenorhabditis elegans’, Genetics 141:4 (1995), 1,399–406; Koen Houthoofd et al., ‘Life Extension via Dietary Restriction is Independent of the Ins/IGF-1 Signalling Pathway in Caenorhabditis elegans’, Experimental Gerontology 38:9 (2003), 947–54.
13. Shawn M. Douglas, Ido Bachelet and George M. Church, ‘A Logic-Gated Nanorobot for Targeted Transport of Molecular Payloads’, Science 335:6070 (2012): 831–4; Dan Peer et al., ‘Nanocarriers As An Emerging Platform for Cancer Therapy’, Nature Nanotechnology 2 (2007): 751–60; Dan Peer et al., ‘Systemic Leukocyte-Directed siRNA Delivery Revealing Cyclin D1 as an Anti-Inflammatory Target’, Science 319:5863 (2008): 627–30.
15 The Marriage of Science and Empire
1. Stephen R. Bown, Scurvy: How a Surgeon, a Mariner and a Gentleman Solved the Greatest Medical Mystery of the Age of Sail (New York: Thomas Dunne Books, St. Martin’s Press, 2004); Kenneth John Carpenter, The History of Scurvy and Vitamin C (Cambridge: Cambridge University Press, 1986).
2. James Cook, The Explorations of Captain James Cook in the Pacific, as Told by Selections of his Own Journals 1768–1779, ed. Archibald Grenfell Price (New York: Dover Publications, 1971), 16–17; Gananath Obeyesekere, The Apotheosis of Captain Cook: European Mythmaking in the Pacific (Princeton: Princeton University Press, 1992), 5; J. C. Beaglehole, ed., The Journals of Captain James Cook on His Voyages of Discovery, vol. 1 (Cambridge: Cambridge University Press, 1968), 588.
3. Mark, Origins of the Modern World, 81.
4. Christian, Maps of Time, 436.
5. John Darwin, After Tamerlane: The Global History of Empire Since 1405 (London: Allen Lane, 2007), 239.
6. Soli Shahvar, ‘Railroads i. The First Railroad Built and Operated in Persia’, in the Online Edition of Encyclopaedia Iranica, last modified 7 April 2008, http://www.iranicaonline.org/articles/railroads-i; Charles Issawi, ‘The Iranian Economy 1925–1975: Fifty Years of Economic Development’, in Iran under the Pahlavis, ed. George Lenczowski (Stanford: Hoover Institution Press, 1978), 156.
7. Mark, Origins of the Modern World, 46.
8. Kirkpatrick Sale, Christopher Columbus and the Conquest of Paradise (London: Tauris Parke Paperbacks, 2006), 7–13.
9. Edward M. Spiers, The Army and Society: 1815–1914 (London: Longman, 1980), 121; Robin Moore, ‘Imperial India, 1858–1914’, in The Oxford History of the British Empire: The Nineteenth Century, vol. 3, ed. Andrew Porter (New York: Oxford University Press, 1999), 442.
10. Vinita Damodaran, ‘Famine in Bengal: A Comparison of the 1770 Famine in Bengal and the 1897 Famine in Chotanagpur’, The Medieval History Journal 10:1–2 (2007), 151.
16 The Capitalist Creed
1. Maddison, World Economy, vol. 1, 261, 264; ‘Gross National Income Per Capita 2009, Atlas Method and PPP’, the World Bank, accessed 10 December 2010, http://siteresources.worldbank.org/DATASTATISTICS/Resources/GNIPC.pdf.
2. The mathematics of my bakery example are not as accurate as they could be. Since banks are allowed to loan $10 for every dollar they keep in their possession, of every million dollars deposited in the bank, the bank can loan out to entrepreneurs only about $909,000 while keeping $91,000 in its vaults. But to make life easier for the readers I preferred to work with round numbers. Besides, banks do not always follow the rules.
3. Carl Trocki, Opium, Empire and the Global Political Economy (New York: Routledge, 1999), 91.
4. Georges Nzongola-Ntalaja, The Congo from Leopold to Kabila: A People’s History (London: Zed Books, 2002), 22.
17 The Wheels of Industry
1. Mark, Origins of the Modern World, 109.
2. Nathan S. Lewis and Daniel G. Nocera, ‘Powering the Planet: Chemical Challenges in Solar Energy Utilization’, Proceedings of the National Academy of Sciences 103:43 (2006), 15,731.
3. Kazuhisa Miyamoto (ed.), ‘Renewable Biological Systems for Alternative Sustainable Energy Production’, FAO Agricultural Services Bulletin 128 (Osaka: Osaka University, 1997), Chapter 2.1.1, accessed 10 December 2010, http://www.fao.org/docrep/W7241E/w7241e06.htm#2.1.1percent20solarpercent20energy; James Barber, ‘Biological Solar Energy’, Philosophical Transactions of the Royal Society A 365:1853 (2007), 1007.
4. ‘International Energy Outlook 2010’, US Energy Information Administration, 9, accessed 10 December 2010, http://www.eia.doe.gov/oiaf/ieo/pdf/0484(2010).pdf.
5. S. Venetsky, ‘“Silver” from Clay’, Metallurgist 13:7 (1969), 451; Fred Aftalion, A History of the International Chemical Industry (Philadelphia: University of Pennsylvania Press, 1991), 64; A. J. Downs, Chemistry of Aluminium, Gallium, Indium and Thallium (Glasgow: Blackie Academic & Professional, 1993), 15.
6. Jan Willem Erisman et al., ‘How a Century of Ammonia Synthesis Changed the World’, Nature Geoscience 1 (2008), 637.
7. G. J. Benson and B. E. Rollin (eds.), The Well-being of Farm Animals: Challenges and Solutions (Ames, IA: Blackwell, 2004); M. C. Appleby, J. A. Mench and B. O. Hughes, Poultry Behaviour and Welfare (Wallingford: CABI Publishing, 2004); J. Webster, Animal Welfare: Limping Towards Eden (Oxford: Blackwell Publishing, 2005); C. Druce and P. Lymbery, Outlawed in Europe: How America is Falling Behind Europe in Farm Animal Welfare (New York: Archimedean Press, 2002).
8. Harry Harlow and Robert Zimmermann, ‘Affectional Responses in the Infant Monkey’, Science 130:3373 (1959), 421–32; Harry Harlow, ‘The Nature of Love’, American Psychologist 13 (1958), 673–85; Laurens D. Young et al., ‘Early stress and later response to separation in rhesus monkeys’, American Journal of Psychiatry 130:4 (1973), 400–5; K. D. Broad, J. P. Curley and E. B. Keverne, ‘Mother-infant bonding and the evolution of mammalian social relationships’, Philosophical Transactions of the Royal Society B 361:1476 (2006), 2,199–214; Florent Pittet et al., ‘Effects of maternal experience on fearfulness and maternal behaviour in a precocial bird’, Animal Behaviour (March 2013), In Press – available online at: http://www.sciencedirect.com/science/article/pii/S0003347213000547.
9. ‘National Institute of Food and Agriculture’, United States Department of Agriculture, accessed 10 December 2010, http://www.csrees.usda.gov/qlinks/extension.html.
18 A Permanent Revolution
1. Vaclav Smil, The Earth’s Biosphere: Evolution, Dynamics and Change (Cambridge, Mass.: MIT Press, 2002); Sarah Catherine Walpole et al., ‘The Weight of Nations: An Estimation of Adult Human Biomass’, BMC Public Health 12:439 (2012), http://www.biomedcentral.com/1471-2458/12/439.
2. William T. Jackman, The Development of Transportation in Modern England (London: Frank Cass & Co., 1966), 324–7; H. J. Dyos and D. H. Aldcroft, British Transport – An Economic Survey From the Seventeenth Century to the Twentieth (Leicester: Leicester University Press, 1969), 124–31; Wolfgang Schivelbusch, The Railway Journey: The Industrialization of Time and Space in the 19th Century (Berkeley: University of California Press, 1986).
3. For a detailed discussion of the unprecedented peacefulness of the last few decades, see in particular Steven Pinker, The Better Angels of Our Nature: Why Violence Has Declined (New York: Viking, 2011); Joshua S. Goldstein, Winning the War on War: The Decline of Armed Conflict Worldwide (New York: Dutton, 2011); Gat, War in Human Civilization.
4. ‘World Report on Violence and Health: Summary, Geneva 2002’, World Health Organization, accessed 10 December 2010, http://www.who.int/whr/2001/en/whr01_annex_en.pdf. For mortality rates in previous eras see: Lawrence H. Keeley, War before Civilization: The Myth of the Peaceful Savage (New York: Oxford University Press, 1996).
5. ‘World Health Report, 2004’, World Health Organization, 124, accessed 10 December 2010, http://www.who.int/whr/2004/en/report04_en.pdf.
6. Raymond C. Kelly, Warless Societies and the Origin of War (Ann Arbor: University of Michigan Press, 2000), 21. See also Gat, War in Human Civilization, 129–31; Keeley, War before Civilization.
7. Manuel Eisner, ‘Modernization, Self-Control and Lethal Violence’, British Journal of Criminology 41:4 (2001), 618–638; Manuel Eisner, ‘Long-Term Historical Trends in Violent Crime’, Crime and Justice: A Review of Research 30 (2003), 83–142; ‘World Report on Violence and Health: Summary, Geneva 2002’, World Health Organization, accessed 10 December 2010, http://www.who.int/whr/2001/en/whr01_annex_en.pdf; ‘World Health Report, 2004’, World Health Organization, 124, accessed 10 December 2010, http://www.who.int/whr/2004/en/report04_en.pdf.
8. Walker and Bailey, ‘Body Counts in Lowland South American Violence’, 30.
19 And They Lived Happily Ever After
1. For both the psychology and biochemistry of happiness, the following are good starting points: Jonathan Haidt, The Happiness Hypothesis: Finding Modern Truth in Ancient Wisdom (New York: Basic Books, 2006); R. Wright, The Moral Animal: Evolutionary Psychology and Everyday Life (New York: Vintage Books, 1994); M. Csikszentmihalyi, ‘If We Are So Rich, Why Aren’t We Happy?’, American Psychologist 54:10 (1999): 821–7; F. A. Huppert, N. Baylis and B. Keverne (eds.), The Science of Well-Being (Oxford: Oxford University Press, 2005); Michael Argyle, The Psychology of Happiness, 2nd edition (New York: Routledge, 2001); Ed Diener (ed.), Assessing Well-Being: The Collected Works of Ed Diener (New York: Springer, 2009); Michael Eid and Randy J. Larsen (eds.), The Science of Subjective Well-Being (New York: Guilford Press, 2008); Richard A. Easterlin (ed.), Happiness in Economics (Cheltenham: Edward Elgar Publishing, 2002); Richard Layard, Happiness: Lessons from a New Science (New York: Penguin, 2005).
2. Daniel Kahneman, Thinking, Fast and Slow (New York: Farrar, Straus and Giroux, 2011); Inglehart et al., ‘Development, Freedom and Rising Happiness’, 278–81.
3. D. M. McMahon, The Pursuit of Happiness: A History from the Greeks to the Present (London: Allen Lane, 2006).
20 The End of Homo Sapiens
1. Keith T. Paige et al., ‘De Novo Cartilage Generation Using Calcium Alginate-Chondrocyte Constructs’, Plastic and Reconstructive Surgery 97:1 (1996), 168–78.
2. David Biello, ‘Bacteria Transformed into Biofuels Refineries’, Scientific American, 27 January 2010, accessed 10 December 2010, http://www.scientificamerican.com/article.cfm?id=bacteria-transformed-into-biofuel-refineries.
3. Gary Walsh, ‘Therapeutic Insulins and Their Large-Scale Manufacture’, Applied Microbiology and Biotechnology 67:2 (2005), 151–9.
4. James G. Wallis et al., ‘Expression of a Synthetic Antifreeze Protein in Potato Reduces Electrolyte Release at Freezing Temperatures’, Plant Molecular Biology 35:3 (1997), 323–30.
5. Robert J. Wall et al., ‘Genetically Enhanced Cows Resist Intramammary Staphylococcus Aureus Infection’, Nature Biotechnology 23:4 (2005), 445–51.
6. Liangxue Lai et al., ‘Generation of Cloned Transgenic Pigs Rich in Omega-3 Fatty Acids’, Nature Biotechnology 24:4 (2006), 435–6.
7. Ya-Ping Tang et al., ‘Genetic Enhancement of Learning and Memory in Mice’, Nature 401 (1999), 63–9.
8. Zoe R. Donaldson and Larry J. Young, ‘Oxytocin, Vasopressin and the Neurogenetics of Sociality’, Science 322:5903 (2008), 900–4; Zoe R. Donaldson, ‘Production of Germline Transgenic Prairie Voles (Microtus Ochrogaster) Using Lentiviral Vectors’, Biology of Reproduction 81:6 (2009), 1,189–95.
9. Terri Pous, ‘Siberian Discovery Could Bring Scientists Closer to Cloning Woolly Mammoth’, Time, 17 September 2012, accessed 19 February 2013; Pasqualino Loi et al., ‘Biological time machines: a realistic approach for cloning an extinct mammal’, Endangered Species Research 14 (2011), 227–33; Leon Huynen, Craig D. Millar and David M. Lambert, ‘Resurrecting ancient animal genomes: The extinct moa and more’, Bioessays 34 (2012), 661–9.
10. Nicholas Wade, ‘Scientists in Germany Draft Neanderthal Genome’, New York Times, 12 February 2009, accessed 10 December 2010, http://www.nytimes.com/2009/02/13/science/13neanderthal.html?_r=2&ref=science; Zack Zorich, ‘Should We Clone Neanderthals?’, Archaeology 63:2 (2009), accessed 10 December 2010, http://www.archaeology.org/1003/etc/neanderthals.html.
11. Robert H. Waterston et al., ‘Initial Sequencing and Comparative Analysis of the Mouse Genome’, Nature 420:6915 (2002), 520.
12. ‘Hybrid Insect Micro Electromechanical Systems (HI-MEMS)’, Microsystems Technology Office, DARPA, accessed 22 March 2012, http://www.darpa.mil/Our_Work/MTO/Programs/Hybrid_Insect_Micro_Electromechanical_Systems_%28HI-MEMS%29.aspx. See also: Sally Adee, ‘Nuclear-Powered Transponder for Cyborg Insect’, IEEE Spectrum, December 2009, accessed 10 December 2010, http://spectrum.ieee.org/semiconductors/devices/nuclearpowered-transponder-for-cyborg-insect?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+IeeeSpectrum+%28IEEE+Spectrum%29&utm_content=Google+Reader; Jessica Marshall, ‘The Fly Who Bugged Me’, New Scientist 197:2646 (2008), 40–3; Emily Singer, ‘Send in the Rescue Rats’, New Scientist 183:2466 (2004), 21–2; Susan Brown, ‘Stealth Sharks to Patrol the High Seas’, New Scientist 189:2541 (2006), 30–1.
13. Bill Christensen, ‘Military Plans Cyborg Sharks’, Live Science, 7 March 2006, accessed 10 December 2010, http://www.livescience.com/technology/060307_shark_implant.html.
14. ‘Cochlear Implants’, National Institute on Deafness and Other Communication Disorders, accessed 22 March 2012, http://www.nidcd.nih.gov/health/hearing/pages/coch.aspx.
15. Retina Implant, http://www.retina-implant.de/en/doctors/technology/default.aspx.
16. David Brown, ‘For 1st Woman With Bionic Arm, a New Life is Within Reach’, Washington Post, 14 September 2006, accessed 10 December 2010, http://www.washingtonpost.com/wp-dyn/content/article/2006/09/13/AR2006091302271.html?nav=E8.
17. Miguel Nicolelis, Beyond Boundaries: The New Neuroscience of Connecting Brains and Machines – and How it Will Change Our Lives (New York: Times Books, 2011).
18. Chris Berdik, ‘Turning Thought into Words’, BU Today, 15 October 2008, accessed 22 March 2012, http://www.bu.edu/today/2008/turning-thoughts-into-words/.
19. Jonathan Fildes, ‘Artificial Brain “10 years away”’, BBC News, 22 July 2009, accessed 19 September 2012, http://news.bbc.co.uk/2/hi/8164060.stm.
20. Radoje Drmanac et al., ‘Human Genome Sequencing Using Unchained Base Reads on Self-Assembling DNA Nanoarrays’, Science 327:5961 (2010), 78–81; ‘Complete Genomics’ website: http://www.completegenomics.com/; Rob Waters, ‘Complete Genomics Gets Gene Sequencing under $5000 (Update 1)’, Bloomberg, 5 November 2009, accessed 10 December 2010, http://www.bloomberg.com/apps/news?pid=newsarchive&sid=aWutnyE4SoWw; Fergus Walsh, ‘Era of Personalized Medicine Awaits’, BBC News, last updated 8 April 2009, accessed 22 March 2012, http://news.bbc.co.uk/2/hi/health/7954968.stm; Leena Rao, ‘PayPal Co-Founder and Founders Fund Partner Joins DNA Sequencing Firm Halcyon Molecular’, TechCrunch, 24 September 2009, accessed 10 December 2010, http://techcrunch.com/2009/09/24/paypal-co-founder-and-founders-fund-partner-joins-dna-sequencing-firm-halcyon-molecular/.
The pagination of this electronic edition does not match the edition from which it was created. To locate a specific entry, please use your e-book reader’s search tools.
Page numbers in italics indicate images.
Abbasid caliphate 199, 364
Aboriginal Australians 16, 25, 44, 59, 234, 277, 281, 301, 378
Achaemenid Persian Empire 220–1
Aché people 52–3
Aemilianus, Scipio 188, 189, 263
Afghanistan 169, 262, 314, 366, 369, 371
Africa viii, 4, 5, 6, 8, 13–19, 15, 20, 21, 44, 48, 64, 65, 67, 69, 71, 72, 77, 78, 98, 111, 135, 135, 140, 156, 167, 173, 174, 178, 194, 200, 201, 202, 203, 209, 214, 218, 222, 241, 275, 279, 280, 281, 284, 287, 288, 290, 291, 292, 296, 318, 330–1, 332, 333, 343, 371, 376, 378, 415
Afro-Asian World 63, 64, 67, 72, 92, 153, 167, 168, 169, 170, 173, 184, 218, 223, 244, 263, 286
Agricultural Revolution, The viii, 3, 39, 42, 44, 46, 47, 48, 51, 58, 59, 72, 74, 75–159, 174–5, 211, 212, 333, 341, 355, 377, 398
Ahura Mazda 221
Akhenaten, Pharaoh 217
Akkadian Empire of Sargon the Great ix, 103, 129, 129n, 194, 195
Alabama 141–2, 154
Alamogordo, first atomic bomb detonated at, 1945 245, 249, 274
Alaska 69, 70, 78, 194, 196, 296
Alba (green fluorescent rabbit) 398–9
Aldrin, Buzz 285
Alexander the Great 112, 146, 157, 196, 290
Algeria 156, 297, 369, 370, 371, 377
‘alpha male’ 25–6, 33, 34, 35, 115, 155, 171
Altamira, cave art of 100
Alyattes of Lydia, King 182
Amazon 62, 70, 368
America viii, ix, 30, 59, 63, 64, 67, 69–72, 77, 78, 98, 168, 170, 184, 198, 279, 284, 286–8, 289, 291–6, 304, 316–17, 325, 330 see also United States
American Indians/Native Americans 71, 133, 151, 170, 171, 283, 285–6, 378
Anatolia 103, 182
Andean World 168, 196
Angra Mainyu 221
animals:
biological engineering of 398–402, 400
cruelty to 91–7, 94, 96, 341–6, 415
domestication of viii, 45–6, 47, 51, 77–8, 91–7, 94, 96
extinction of viii, ix, 65–74, 97, 305, 350, 351
industrial agriculture and 341–6, 343, 350, 379
Animism 54–5, 211–13, 218, 223
Apollo 11 64, 285, 287, 412
Arab Empire 130, 194, 199, 201, 202, 203, 239, 241, 252, 262, 283, 284
Arab Spring, 2011 240
Arabian peninsula 14, 218
Arabic numerals 130
Arctic 36, 59, 67, 69, 70, 73, 317, 401
Argentina 57, 70, 126, 168, 170, 371
Aristotle 134, 136
Armenians 192, 365
arms race 243
Armstrong, Neil 285, 304, 376
Arthur, King 114, 164
Aryan race 138, 139, 140, 232–6, 302–3
Asia 6, 8, 14, 15, 21, 63, 65, 67, 71, 77, 140, 166, 167, 169, 170, 178, 184, 194, 209, 215, 218, 221, 222, 227, 279–80, 281, 282, 287, 288, 296, 299, 302, 315–16, 317, 318, 321, 369, 370 see also Afro-Asia
Assyrian Empire 103, 153, 192, 194, 195, 354
Atahualpa 295–6
Athens, ancient 146, 149, 152, 190, 191, 290, 371
Atman 214
atomic bomb 245, 245, 249, 261–2, 274, 338, 372 see also nuclear physics
Augustine, St 193, 393
Augustus, Emperor 157
Aurelius, Emperor Marcus 200
Australia viii, 16, 21, 25, 44, 48, 59, 62, 63, 64–9, 72, 78, 98, 168, 234, 276–8, 281, 301, 304, 378
Australian World 168
Australopithecus 5–6
Aztec Empire 55, 153, 168, 173, 190–1, 215, 219, 284, 291, 292–5, 293, 374
Babylon 105–6, 108, 115, 116, 376
Babylonian Empire 103, 104, 105–7, 108, 111, 115, 116, 120, 193, 194, 195, 298, 299, 364, 376
Bacon, Francis 259
Banks, Joseph 276, 278, 301
barbarians 171–2
Barí Indians 41
Battuta, Ibn 169
Beagle, HMS 284–5
bees 22, 25, 119–20, 171, 398
Behistun Inscription 298
Berbers 201, 202, 203
Bernoulli, Jacob 256–7
Bible 127, 145, 182, 223, 251, 252, 255, 266, 285, 287
Big Bang 3, 252, 411
Bin Laden, Osama 172, 262
binary script 131, 132
bio-dictatorships 401
biofuel 249, 401
biology:
birth of viii, 3
biological determinism 146–7
biological engineering 398–402, 400, 403
equality and 109–10
gender and 146–50, 152, 153–4
happiness and 380, 385–90, 391, 394, 395
history of 37–9
race and 134, 135, 136, 139, 140–1, 144, 145, 146, 232, 235, 236, 302–4
bionic arms 405–6, 406
biotechnology 315
bonobos 33, 41, 56, 158
Brahmins 135–6, 137, 143, 144
brains 8–9, 10, 11, 12–13, 14, 20–1, 29, 40, 49, 78, 119–22, 127, 129, 131, 252, 262, 389, 403, 407, 409
British East India Company 205, 325, 331–2
British Empire 190, 192, 198, 199–200, 204–6, 205, 278–9, 297–302, 324–6, 368–70
Buddhism ix, 10, 34, 127, 172, 198, 210, 223, 224–7, 225, 228, 229, 230, 238, 251, 349, 394–5, 396
Buka 64
Byron, Lord 326
Byzantines 239, 262
Caesar, Julius 157, 170
Caledonian tribes 193–4
Calgacus 193–4
California 372–3, 373
Caligula, Emperor 95–6
capitalism ix, 112, 134, 168, 169, 198, 203, 208, 230, 240, 250, 254, 264, 274, 282, 283, 304, 305–33, 334, 347–9, 373–4, 377 see also money
Caribbean Islands 71–2, 291, 292, 295 see also under individual island name
Carthage 188, 190, 263, 290
Çatalhöyük, Anatolia 103
Catholic Church 27, 31, 34, 35, 137, 154, 156, 174, 179, 189, 216, 220, 318
Celts 188, 189, 199, 220, 300, 302
Central America 70, 78, 88, 126, 168, 196, 292
Cervantes, Miguel de: The Siege of Numantia 189
Chak Tok Ich’aak of Tikal, King 167
chaotic systems 240
Chauvet-Pont-d’Arc Cave, France 1, 100, 123, 376
chemistry, beginning of viii, 3
Chhatrapati Shivaji train station, Mumbai 205, 205
child mortality 10, 52, 269, 333, 379
childbirth 10, 145
childrearing 10, 84, 86–7
chimpanzees viii, 4, 5, 9, 12, 25, 26, 32, 33, 34, 38, 41, 56, 111, 115, 148, 155, 158, 171, 236, 350, 383, 398, 414
China 18, 34, 48, 51, 78, 83, 103, 126, 128, 135, 144, 156, 184, 194, 196–7, 201, 239, 244, 262, 263, 280, 281, 282, 283, 290, 296, 316, 325–6, 336, 357–8, 379
chivalry 164
Christianity ix, 10, 20, 38, 109, 112, 147, 164, 165–6, 172, 173–4, 185, 186–7, 201, 215–16, 217–20, 219, 222, 223, 228, 230, 231, 236, 237, 238–9, 240, 241, 242, 244, 251, 252, 265–6, 266–7, 278, 288, 330, 331, 349, 374, 393, 413
Church, Professor George 402
Cicero 193
Claudius, Emperor 200
Cleopatra of Egypt 153, 384
Code of Hammurabi, 1776 BC 104, 105–7, 108, 110–11, 113, 120, 127, 133, 134, 182, 364
cognitive dissonance 164–6
Cognitive Revolution, The viii, 1–74, 171, 250, 355, 376, 403
coinage ix, 174, 177, 178, 180, 182–3, 183, 184, 186, 187, 209, 244, 307, 312, 319, 320, 376
Columbus, Christopher 64, 247, 272, 284, 286–7, 288, 290, 291, 292, 304, 316–17
Communism 34, 144, 165, 176, 203, 228, 229, 234, 235, 236, 242, 253, 271, 274, 333, 369, 377, 379, 413
communities:
collapse of 355–64, 382
imagined 362–4
Confucianism 223, 251, 255, 259, 264, 349
Congo Free State 332, 333
conquest, the mentality of 283–6
Constantine, Emperor 215, 238, 239, 263
consumerism 115–16, 347–9, 362, 363
Cook Islands 73
Cook, Captain James 276, 278–9, 281, 284, 301
cooperation, social 22–4, 27–8, 32–6, 37, 38–9, 46, 102–5, 119, 133, 159, 187
Copernicus, Nicolaus 275
corporations 28, 30, 32, 36, 274, 310, 322, 330, 342
Cortés, Hernan 173, 185, 291–4, 295
cowry shells 177, 177, 178, 179, 180, 183, 185, 186
credit 271, 280, 308–11, 315, 316, 317, 318, 321, 324, 326, 327–8, 329
Crusades 164
Cuba 71, 292, 295
cultures, human:
‘authentic’ 169
biological laws and 38, 146–8, 153–4
birth of 3, 18, 37, 163
clash of 169, 303–4
constant flux of 163–4
contradictions in 164–6
empires spread a common culture 197–208, 237 see also empires
global culture, emergence of a single 168–72, 237
history and 37, 163, 166–70, 237, 241–4
ideal of progress and 264–6
memetics (cultures as mental infections) 242–4
universal orders and 172, 173–236 see also under individual order
cuneiform 126, 298
cyborg engineering 399, 404–7, 406, 409, 411
Cynics 112, 223
Cyrus the Great of Persia 194–5, 196, 197
Dani, the 82
Danube Valley 60, 60n
Daoism 223, 229, 263
Darius I, King 298
Darwin, Charles 18, 234–5, 252, 258, 272, 283, 285, 302, 393, 397, 399
David, King 193
Declaration of Independence, US, 1776 18, 105, 105, 107–9, 110, 133
Defense Advanced Research Projects Agency (DARPA) 404–5
demography 48, 69–70, 88, 89, 258, 280, 305
denarius coin 183, 184
Denisova Cave, Siberia 7–8, 16, 17
Denisovans 7–8, 16, 17, 18, 19
Department of Defense, US 262
determinism 238–40
Dickens, Charles 165, 366
dinar 184
Dinka people 196
Diogenes 112
diprotodon 65, 66, 67, 68, 69, 74
DNA 4, 16, 19, 21, 41, 83, 93, 393–4, 402–3, 409–10
Dutch West Indies Company (WIC) 322
dwarfing 7
East Africa viii, 4, 5–6, 8, 13–14, 15, 20, 48, 77, 290, 296
East Asia 6, 14, 21, 140, 178, 184, 194, 218, 227, 286, 288, 316
Easter Island 73
ecological disasters 65–74, 239, 350–1
Ecuador 82, 126, 370
Edward I, King 269–70
Edward II, King 270
Egypt 75, 75, 94, 94, 103, 104, 115, 116, 116, 124, 126, 128, 153, 154, 171–2, 183, 194, 201, 202, 203, 215, 217, 241, 266, 281, 284, 326, 376, 384, 385, 385
Einstein, Albert 22, 38, 254, 338
Eisenhower, Dwight 260
El-Asad, Hafez 363
Eleanor, Queen 269–70
electricity 249, 338
elephant bird 72–3
elephants 4, 5, 7, 22, 158, 213, 224, 350, 402
elites 34, 79, 101, 102–12, 115, 116, 156, 192, 193, 194, 198, 199, 200, 201, 203, 208, 215, 232, 283, 295, 303, 313, 316, 346, 369, 374, 410
Elizabeth I, Queen 153
energy 333, 334–41, 350, 351, 376
empire:
capitalism and 315–28
common culture, spreads 195–203
cultural assimilation under 198–203
cycle, imperial 202–3
‘evil’ nature of 191–4
first ix, 103–8
global 207–8
language, emergence of and 103–12, 120–2, 126
majority of cultures as offspring of 188–9
positive legacies of 193–4, 204–7
religion and 215–16, 218, 219–20, 222, 238, 241, 242
modern, collapse of 368–70
science and 275–304
as universal order 188–9
definition 190–1
Enga 82
Epicureanism 223
equality 106–10, 113, 133, 134, 141, 159, 164–5, 202, 203, 204, 231, 233, 282, 303, 409, 410–11
Euphrates River 101
Eurasia viii, 6, 14, 67, 280
Europe viii, ix, 6, 8, 14, 15, 16, 21, 28, 35, 44, 59, 77, 130, 136, 140, 144, 150, 164, 165, 166, 167, 168, 170, 184, 185, 193, 201, 202, 203, 209, 216, 218, 231, 241, 244, 250, 252, 258, 263, 268, 269, 272, 275, 276, 277, 278–83, 284, 286, 287, 288, 289, 290–1, 294, 296, 297, 298, 299, 300, 301, 302, 303, 304, 315, 316, 317, 318, 319, 321, 322, 326, 330–1, 332, 333, 341, 348, 354, 362–3, 367, 368, 369, 370, 371, 376, 377, 379, 389, 409
European Union 363
evolution viii, 4, 8, 9, 10, 15, 16, 20, 33, 39, 40, 41, 71, 78–9, 80, 83, 93, 96, 97, 103, 109, 147, 148, 155, 157, 172, 195, 232, 233, 234–5, 236, 242, 243–4, 248, 253, 258, 262, 272, 285, 343–4, 360, 378, 386, 391, 399, 408 see also genetics
evolutionary humanism 232–6
evolutionary psychology 40, 41, 343–4
extinction viii, ix, 17, 18–19, 20, 21, 65–74, 83, 97, 232, 235, 305, 350–1, 402–4
family and the local community, collapse of 355–64, 382
famine 51–2, 87, 264, 265, 301–2, 331–2, 378, 379, 415
fictions, evolution of 24–36, 39, 45, 132, 134, 163 see also myths
Fiji 73
fire, domestication of viii, 12–13, 14, 68
First World War, 1914–18 260–1, 340–1, 365, 374
fishing villages 48, 64
Flores Island, Indonesia 7, 19, 63
food chain, man jumps to top of 11–12, 64–5, 155
France 1, 29, 30–1, 32, 38, 102, 118, 150, 156, 164, 190, 192, 194, 201, 202, 216, 220, 270, 281, 282, 283, 297, 300, 302, 303, 308, 316, 319–20, 322–5, 326, 340, 363, 364, 365, 366, 369, 370, 371, 373, 377, 388, 389, 399
Frankenstein 411–14
Franklin, Benjamin 265, 265
free market 113, 230, 328–30, 331, 356, 377
free trade 198, 326
French Empire 156, 192, 324, 369, 370
French Revolution, 1789 32, 38, 102, 164, 324, 365, 366, 377, 389
Front National 303
Galapagos Islands 74, 252, 284
game theory 243
Gandhi, Mohandas Karamchand 200–1, 369, 375
Ganges Valley 211
Gauls 183, 198, 202, 203, 289, 290
Gautama, Siddhartha 224–6, 228
gender 107, 144–59, 150, 151, 303, 407, 413
genetic programming 408–9
genetics 4, 7–8, 15–16, 17, 21, 32–6, 43, 45, 83, 84, 90, 109, 120, 140, 232, 238, 258, 267, 270, 274, 315, 380, 385, 386, 389, 398, 401–2, 403, 408–9, 410, 413, 414
genus viii, 4–5, 8
Germany 23, 34, 145, 192, 193, 198, 235, 257, 261, 281, 300, 302, 319, 340–1, 354, 362, 363, 364, 371, 405
Gilgamesh Project 266–71, 410, 414
global warming 70, 84, 207, 351
Gnosticism 222
Göbekli Tepe 89–90, 90, 91, 123
gold 173–4, 180, 182, 183, 184–6, 187, 209, 244, 297, 313, 316, 317, 319, 330, 338, 340, 372, 373
Gorbachev, Mikhail 369, 370
Great Leap Forward, 1958–61 379
Great Pyramid of Giza 116, 116
Great Survey of India 297–8
Greece 146, 153, 183, 188, 190, 191, 199, 213–14, 283, 299, 300, 302, 326–7, 326, 371
Greek Rebellion, 1821 326–7, 327
green monkeys 22, 32
Green, Charles 276
Gulf War, 1990–1 371
Gupta Empire 206, 298
Haber, Fritz 341
Habsburg Empire 190, 193
Hadrian, Emperor 200
Halley, Edmond 257
Ham, son of Noah 140
Han Empire ix, 194, 201
happiness 83, 93, 108, 109, 110, 120, 131, 243, 314, 376–96
Harlow, Harry 344–6
Harry Potter 137
Hawaii 63, 73, 168
Henry the Navigator, Prince 284
Hephaestion 146
hierarchy, principle of 25, 26, 33, 53, 54, 55, 56, 57, 107–10, 111, 114, 133–48, 155, 210, 241, 303, 342
hindsight fallacy 237–41
Hindu religion 43, 127, 130, 135, 138, 139, 143, 151, 205, 206, 214, 227, 230, 273–4, 299, 300, 406
Hispaniola 71
history:
biology and 37–9
birth of viii, 3, 37–9
direction of 163–72, 237–44
hindsight fallacy 237–41
human well-being and 241–4 see also happiness
justice in 133–59
next stage of 413–14
prediction of 237–41
timeline of viii–ix
Hitler, Adolf 234, 235, 236, 365, 374
Hittite Empire 194
Holy Grail 164
home 85–6, 98–9, 100
Homo denisova 7–8, 16, 17, 18–19
Homo erectus 6, 8, 12, 14, 33, 77
Homo ergaster 8, 77
Homo floresiensis viii, 7
Homo neanderthalensis see Neanderthals
Homo rudolfensis 6, 8
Homo sapiens:
Agricultural Revolution and see Agricultural Revolution
appearance of in Africa viii, 5, 8, 13–19, 20, 21
becomes a god 415–16
Cognitive Revolution and see Cognitive Revolution
end of 397–416
global migrations viii, 5–6, 13–19, 15, 20, 21, 48, 77
other human species and 13–19, 20–36
Scientific Revolution and see Scientific Revolution
unification of humankind and 161, 163–244
as xenophobic creature 195–6
Homo soloensis 6–7, 18
Homo: evolution of genus viii, 4, 5 see also human
Hong Kong 326
Huitzilopochtli 215, 219
Human Brain Project 409
Human Enhancement question 413
human rights 28, 32, 37, 110, 111, 112, 118, 168, 198, 202, 203, 204, 207, 231, 234, 240, 363, 401, 410
humanism 230–2, 233, 234, 236, 253
humans:
appearance of 3–4, 5–8
brains of see brain
common defining characteristics of 8–9
distinct species of viii, 5–8, 6, 7 see also under individual species name
fire and cooking, discovery of 12–13
food chain, jumps to top of 11–12, 64–5, 155
other apes and viii, 5
relations between different species of 13–19, 20–36
spread from Africa to Eurasia viii, 5–6, 13–19, 20, 21, 48, 77
superhumans see superhumans
use of tools see tools, first
walk upright 9–10
see also under individual species name
hunter-gatherers/foragers 38, 40–62, 64, 78, 79, 81, 85, 89, 98, 99, 121, 167, 174–5, 211, 272, 376, 377, 378
Hussein, Saddam 363, 364
Huxley, Aldous: Brave New World 390
Iberian peninsula 13, 173, 188, 189, 199, 200
Ice Age 6, 66, 84, 167
ignorance, discovery of ix, 247–54
Iliad 127, 146
imagined communities 362–4
imagined orders 102–18, 133–4, 152, 172, 177, 363
imagined realities 31, 32, 37, 45, 112–18
Inca Empire 125, 125, 126, 128, 153, 168, 176, 292, 293, 294–6
Incitatus 95–6
India ix, 115–16, 130, 135–6, 137, 138–9, 140, 144, 145, 184, 185, 193, 196, 199–200, 202, 203, 204–6, 205, 222, 223, 243, 297–302, 331–2
individualism 113, 114, 231, 233, 359–60, 392–4
Indonesia 6–7, 13, 48, 63–4, 169, 207, 286, 290, 291, 321–2, 325, 332, 333
Indus River/Valley 101, 211, 298
Industrial Revolution ix, 74, 141, 264, 332, 335, 337, 339, 341, 346, 351, 352, 353, 355–6, 358–9, 363, 376 see also Scientific Revolution
intelligent design ix, 397–8, 399
inter-subjectivity 116–18, 152, 177, 363
Interbreeding Theory 14, 15, 16
internal combustion engine 315, 338
internet 365, 407, 408, 413
Iran 55, 77, 106, 168, 169, 194, 201, 202, 203, 371, 374
Iraq 106, 184, 194, 363, 364, 366, 370, 371, 373
Isabella of France, Queen 270
Islam ix, 172, 201, 202, 203, 209–10, 218, 219, 228, 229, 239, 242, 251, 266, 283, 296, 362, 376, 413
Israel 47, 60, 168, 217, 371, 374, 387
Jabl Sahaba, Sudan 60
Jainism 223
Japan 209, 227, 261–2, 286, 291, 296, 321, 341, 371, 402, 407
Jati (Indian caste groupings) 139
Java, Indonesia 6–7, 332
Jefferson, Thomas 111, 113
Jericho 83, 86, 103
Jerusalem 127, 193
Jesus of Nazareth 18, 217, 238, 259, 264, 265–6, 308
Jews 55, 139, 146, 192, 193, 195, 217, 218, 222, 232, 341, 365
‘Jim Crow’ laws 142
Jones, William 299–300, 302, 304
Jordan 184, 191, 363, 370
Judaea 193, 195
Judaism 217, 238
Ka’aba, Mecca 161, 209–10
Kac, Eduardo 398–9, 400
Kahneman, Daniel 390
Kalahari Desert, Africa 44, 50, 59
Karaçadag Hills 90
King, Clennon 142–3
Kipling, Rudyard 301
Kshatriyas 135
Ku Klux Klan 143
Kublai Khan 209
Kushan Empire 206
Kushim 123
Kuwait 370, 373
language, evolution of viii, ix, 19, 20–8, 21n, 44, 45, 46, 119–32, 189, 192, 194, 195, 196, 201, 256–9, 280, 298–301, 302 see also under individual language name
Lascaux Cave 56, 56, 100
Latin language 4, 6, 30, 31, 54, 81, 124, 126, 166, 189, 199, 202, 250, 299, 300, 302
Law of Large Numbers 256–7
Law, John 322–3, 324
le Pen, Marine 303
Lebanon 363, 370
legal fiction 29
Lenin, Vladimir Ilyich 228, 253, 365
Leopold II of Belgium, King 332
Levant 20, 77, 78, 85, 201
liberal humanism 230–3, 234, 236, 253
liberalism 198, 203, 228, 229, 230, 235, 271, 303, 392–4
liberty, concept of 108, 109–10, 111, 120, 133, 134, 165, 230–1
Libya 32, 200, 370
life expectancy 50–1, 268–9, 270, 333, 402, 403
life sciences 236
life-insurance 256–8
limited liability companies 29–30, 31, 32, 37, 110, 317–22, 325, 363
Lind, James 276–7
linguistics 21n, 23–4, 258, 284, 298–302 see also language
locked-in syndrome 407
Louis XIV of France, King 150
Louis XV of France, King 322–3, 324
Louis XVI of France, King 111, 264, 324, 389
luxury trap 84–8
Lydia 182, 183, 183, 184
Macedonian Empire 146, 153, 184, 188, 263
Maclaurin, Colin 256, 257
Madagascar 63, 72–3, 275, 291
Magellan, Ferdinand 168, 248, 284
Majapahit, empire of 290
Mali 209
Malthus, Robert 258
mammoths 5, 48, 56–7, 58, 67, 69, 70, 71, 74, 78, 90, 378, 402
Mandate of Heaven 196–7, 198, 207
Manhattan Island 319, 323, 323
Manhattan Project 261, 263
Manichaean creed 222, 237, 238, 241, 242, 255
Manus 64
Maoris 66–7, 201, 277
maps 286–91, 289
Mari 127–8
Marquis Islands 73
marriage 382, 387–8
marsupials 65, 66, 67, 68, 69
Marx, Karl 18, 228, 229, 242, 253, 271, 283, 325
masculinity 145, 148, 150, 151, 152–3, 155
mathematics 39, 89, 121, 122, 124–6, 127, 130–2, 251, 254–9, 275, 300
Mauryan Empire 198, 206, 298
Mayan Empire 67, 292
meditation 224, 226, 394, 395
Mediterranean ix, 35, 73, 102, 103, 184, 185, 188, 191, 194, 223, 238, 279, 280, 340
Melanesians 16
memetics 242–4
Menes 171–2
Mesoamerican World 168
Mesopotamia 105, 106–7, 122, 126, 128–9, 167, 181–2, 194, 201, 209
Mexico 55, 70, 78, 170, 173, 185, 191, 291–6, 374
microorganisms 248–9, 398
Middle East viii, 14, 16, 20, 21, 77, 78, 80, 84, 85–6, 88, 103, 140, 145, 192, 222, 240, 279, 298, 299, 303, 330, 363
military–industrial–scientific complex 260, 280, 281
Ming Empire 280, 290, 357–8
Mississippi Bubble 322–5, 329
Mississippi Delta 70
Mitchell, Claudia 405, 406
Modern Times (movie) 353
Mohenjo-daro 298
money 172, 173–87 see also capitalism
Mongol Empire 166, 199, 209, 262, 263, 284, 370
monogamy 39, 41, 42, 55, 402, 403
Montezuma II 294–5, 297
moon, humans land on the, 20th July, 1969 4, 64, 248, 249, 285–6, 287, 304, 376
Mubarak, Hosni 241, 384, 385
Mughal Empire 193, 206, 280, 296, 298
Muslims 43, 144, 153, 165–6, 169, 173–4, 184, 185, 196, 198, 201, 202, 203, 206, 209–10, 222, 223, 237, 242, 281–2, 283, 297, 300, 303, 315, 316, 362, 371
Mussolini, Benito 261
mythology 24, 25, 27–8, 32–3, 36, 38, 46, 102–3, 105–6, 108, 109, 111, 113, 114–15, 116, 117, 118, 135, 138, 140, 141, 142, 144, 148–9, 157, 159, 163, 192, 212, 253, 255, 264, 267, 282, 301, 363, 412
Nader Shah 316, 317
nanotechnology 262, 270, 315
Napoleon Bonaparte 156, 284, 325, 389
Napoleon III of France 340
nationalism 25, 202, 203, 204, 205, 206, 207, 228, 230, 240, 243, 271, 325, 326, 359, 362–4, 376–7, 391
Natufian culture 85, 89, 123
natural selection ix, 10, 34, 235, 272, 302, 386, 393, 397, 398–9
Navarino, Battle of, 1827 327, 327
Nazism 34, 228, 232–6, 253, 274, 331, 354
Neanderthal Genome Project 402
Neanderthals (Homo neanderthalensis) viii, 6, 8, 12, 13–18, 17, 19, 20, 21, 23, 34–5, 36, 38, 61, 69, 77, 103, 123, 232, 398, 402–3, 404, 411, 412
Netherlands 283, 317–21, 325, 332
New Amsterdam 322, 323
New Britain 35, 64
New Caledonia 73
New Guinea 35, 78, 82, 82, 94
New Ireland 35, 64
New Testament 34, 265
New Zealand 46, 63, 66–7, 73, 168, 276, 277
Newton, Isaac 258, 283; The Mathematical Principles of Natural Philosophy 255–6
Nietzsche, Friedrich 391
Nile Valley 103, 385
Nordic gods 214
North Africa 6, 173, 174, 218, 222, 262, 279
North America 59, 67, 71, 78, 80, 108, 168, 170, 275, 325
Nü Wa 135
nuclear family 39, 41, 42, 45, 55, 56, 360, 362
nuclear physics/weapons ix, 24, 38, 39, 169, 243, 245, 245, 249, 259, 272–3, 335, 338, 339, 351, 369, 372, 374, 379, 412
Nuer tribe 95, 196
Numantia 188–9, 191, 192, 199, 263
Nurhaci 316, 317
Obama, Barack 151, 199, 384
obsidian 35, 36, 174, 211
Oceanic World 168
Ofnet Cave, Bavaria 60
Old Testament 182, 222, 223
Olympias of Macedon, Queen 146
Opium War, First, 1840–2 325–6
Oppenheimer, Robert 245, 372
organisms, emergence of viii, 3
Orwell, George: Nineteen Eighty-Four 390
Ottoman Empire 138, 153, 280, 281, 296, 297, 316, 326, 327, 357, 370
‘Outer World’ 63, 66, 279
Pakistan 221, 243, 371
patriarchy 5, 152–9, 355
Patroclus 146
Paul of Tarsus 217, 218, 393
peaceful era, modern times as 366–75
permanent settlements, emergence of viii, 48, 52, 85–6, 87, 98, 103
Persian Empire ix, 103, 195, 196, 199, 222, 281–2, 283, 290, 298
personalised medicine 409–10
Peugeot 28–31, 29, 105, 117–18
Peugeot, Armand 30, 31
Philip, Emperor 200
Philip of Macedon, King 146
Philippines 284
physics, beginning of viii, 3, 21
Pius, Emperor Antoninus 200
Pizarro, Francisco 292, 295–6
plants:
domestication of 77–8, 80, 81, 83, 85, 97, 100, 211–12
genetic engineering of 401
mechanisation of 341–2
Polynesians 73, 283
population numbers 47, 59, 79, 83, 84, 86, 87, 98, 190–1, 247, 277, 292, 295, 301, 305, 333, 336, 351
Portugal 59–60, 194, 198, 283, 284, 297, 316, 318
postmodernism 243
poverty 130–1, 143, 214, 224, 264, 265–6, 332, 382, 384
progress, the ideal of 264–7
Protestant Church 137, 216, 318
Purusa 135
Qin dynasty 103, 105, 357
Qing dynasty 280, 316
Qín Shǐ Huángdì 197
quantum mechanics 21, 131, 252, 256, 258, 259
quipu 125–6, 125, 127
race 15, 134–5, 135, 138, 140–4, 145, 196, 232–6, 234, 241, 253, 277, 301, 302–4
railways 205, 205, 281–2, 337–8, 354
raw materials 48, 50, 92, 180, 333, 334–6, 339–41, 342, 364
Rawlinson, Henry 298–9, 300
religion 18
Agricultural Revolution and 90–1, 90, 211–13
animism and see animism
birth of 18, 24, 54–5, 60
definition of 210
dualism 220–3
free will and 220–1
happiness and 394–6
hierarchies and 138–9
hindsight fallacy and 238–9
humanist 228–36
hunter-gatherers and 54–5
language and emergence of 24–5, 27, 28
local and exclusive 210–11
monotheistic 213, 214, 215, 217–21, 222, 223, 227, 230, 231–2, 237, 238, 239, 401
natural-law religions 223–7, 228
patron saints 219–20
polytheistic viii, ix, 212–16, 217, 218–20, 223, 238
Problem of Evil 220, 221
science and 252, 253, 254, 255, 266–7, 271, 272, 273, 274, 348, 349
syncretism 223, 230
see also mythology and under individual religion name
Replacement Theory 14–16
research, funding of scientific 271–4
Roman Empire ix, 18, 55, 95–6, 102, 103, 104, 105, 153, 167, 183, 184, 188–9, 192, 193–4, 196, 198, 199, 200–1, 202, 203, 208, 214, 215, 216, 218, 219, 222, 238, 239, 244, 262–3, 279, 284, 289–90, 291, 302, 370
romantic consumerism 115–16
Rousseau, Jean-Jacques 393
Royal Navy 153, 277, 284
Royal Society 275–6, 277
sacrifice, human 52, 58
Safavid Empire 280, 296
Salviati World Map, 1525 289
Samarkand 209
Samoa 73
Sargon the Great ix, 103, 194–5
Sassanid Persian Empire 222, 241, 242, 262, 354
Scientific Revolution ix, 3, 244, 245, 247–414
Scotland 193, 220, 256–8, 296
Scottish Widows 256–8
script, partial and full 124–7, 124, 130
scurvy 276–7
sea levels 7, 64, 69, 167, 351
seafaring societies, first 63–4
Second World War, 1939–45 234, 261–2, 326, 354, 366
Seleucid Empire 188, 263
Seneca 193
Severus, Emperor Septimius 200
sexual relations 4, 14–15, 16, 17, 34, 35, 40, 41–2, 43, 45, 84, 93, 141, 143, 145, 146, 147, 148–9, 150, 151, 152, 153, 159, 179, 186, 226, 232, 360–1, 377, 386, 393, 400, 411
Shaw, George Bernard: Pygmalion 136–7
shells, trading in 35, 36, 103, 174, 177, 178, 179, 180
Shudras 135, 136, 144
Siberia 7–8, 16, 62, 67, 69, 70, 196, 275, 340, 402
silver shekel 107, 120, 182
singularity 409–11
skeleton, effect of Agricultural Revolution on human 10, 80–1
slavery 74, 96, 104, 106, 107, 110, 111, 120, 133, 134, 136, 138, 140–3, 154, 156, 181, 182, 186, 188, 209, 279, 292, 305, 330–1, 332, 333, 338, 343, 361, 372
Smith, Adam 113, 271, 283, 311–12, 329; The Wealth of Nations 311–12, 329
social structure 11, 21, 25, 34, 41, 60, 61, 77, 94, 120, 144, 282, 342, 364, 396, 402 see also hierarchies
socialist humanism 231–2, 233
Solander, Daniel 276
Solomon Islands 73
Solzhenitsyn, Alexander 165
Song Empire 263
South Africa 78, 135, 135, 194, 200, 218, 275
South America 71, 78, 126, 168, 284, 291–6, 318, 370, 371, 374
Soviet Union 176, 196, 198, 272–3, 369–70
Spain 125, 126, 168, 170, 173, 189, 194, 198, 201, 247, 248, 283, 284, 286, 291–6, 297, 298, 312, 316–21
species, classification of 4–5
St Bartholomew’s Day Massacre, 1572 216
Stadel Cave: lion-man in 21, 23, 23, 28, 32, 39, 400
statistics 256–9, 269, 366
Stoicism 223
Stone Age 12, 39, 41, 42, 55, 67, 377
Sudan 60, 95, 196
Suez Canal 326
Sullivan, Jesse 405, 406
Sumer/Sumerians 122–6, 128, 129n, 181, 267, 308
Sungir, Russia 56–8
superhuman order 210, 223, 228, 229
superhumans ix, 233, 236, 403, 410
Syria 42, 106, 183, 194, 200, 201, 209, 363, 364, 370, 371
Tacitus 193–4
Tahiti 276
Taj Mahal 193, 206, 206
Talmud 193
Tasmania 163, 167, 168, 277, 278, 279, 379
taxes 29, 103, 104, 120, 121, 124, 126, 127, 129, 165, 174, 178, 180, 183, 191, 198, 220, 241, 249, 273, 309, 314, 316, 320, 328, 336, 357, 358, 359, 373
Tenochtitlan 294, 295
Teotihuacan 167
Theism 55 see also monotheism; polytheism and dualism
Theory of Relativity 131, 229, 256
thought-control 406–7
Tierra del Fuego 70
time, modern 352–5
Toltecs 292
Tonga 73
tools, first stone viii, 7, 9–10, 11, 14, 19, 20, 33, 35, 38, 42, 61, 85
tourism 115–16
trade 35–6, 38, 46–7, 64, 103, 119, 140, 141, 170, 172, 175, 176, 177, 178, 181, 185, 186, 198, 212, 240, 272, 289, 310, 313, 316, 318, 319, 321, 322, 326, 330, 331–2, 333, 343, 373–4, 415
Trajan, Emperor 200
Truganini 278, 279, 279, 379
Turkey 77, 89, 194, 201, 203, 209, 283, 327, 338, 371
unification of humankind, the 161, 163–244
United Nations 32, 370
United States 4, 18, 30, 43, 64, 105, 105, 107–9, 110, 111, 112, 118, 133, 134, 138, 140–3, 144, 145, 150, 151, 165, 172, 184, 196, 197–8, 199, 230, 234, 239, 248, 249, 261–2, 272–3, 280, 281, 282, 285–6, 297, 304, 346, 348, 369, 376, 381, 384, 405–7 see also America
US Civil War, 1861–5 111, 141
US Constitution 141
universal orders 55, 170–2 see also money; empires and religion
University of Mississippi 142–3
V-2 rocket 261, 261
Vaishyas 135
Valence, Emperor 167
van Leeuwenhoek, Anton 248–9
Vasco da Gama 284
Vereenigde Oostindische Compagnie (VOC) 321–2, 325, 332
Verne, Jules 248
Vespucci, Amerigo 287–8
Victoria, Queen 326
Vietnam War, 1956–75 297, 369
Voltaire 111
Waldseemüller, Martin 288
Wall Street 38, 322, 323, 323, 374
Wallace, Robert 256–8
war, disappearance of international 370–5
Waterloo, battle of, 1815 268
weaponry 260–4, 277
Webster, Alexander 256–8
wheat, Agricultural Revolution and 12, 51, 77, 78, 79, 80–1, 83–8, 89–91, 97, 179, 183, 313, 336
women:
Agricultural Revolution and 86
hierarchies and 133, 134, 143, 144–5, 146, 147, 148–59
hunter-gatherer 10, 41, 51, 52–3, 72, 80, 84, 86
liberation of the individual and 360
sex and gender 148–59
Wrangel Island, Arctic Ocean 67
writing, evolution of 119–32
Wu Zetian of China, Empress 153
Yoruba religion 214
Yupik 196
Zheng He, Admiral 290–1, 296
Zimrilim of Mari, King 127
Zoroastrianism 221–2, 237, 238
Zulu Empire 194
For their advice and assistance, thanks to: Sarai Aharoni, Dorit Aharonov, Amos Avisar, Tzafrir Barzilai, Katherine Beitner, Noah Beninga, Suzanne Dean, Tirza Eisenberg, Amir Fink, Einat Harari, Liat Harari, Pnina Harari, Sara Holloway, Benjamin Z. Kedar, Yossi Maurey, Eyal Miller, David Milner, John Purcell, Simon Rhodes, Shmuel Rosner, Rami Rotholz, Michal Shavit, Michael Shenkar, Ellie Steel, Ofer Steinitz, Claire Wachtel, Hannah Wood, Guy Zaslavsky and all the teachers and students in the World History programme of the Hebrew University of Jerusalem.
I am particularly indebted to Idan Sherer, my devoted research assistant, and to Haim Watzman, a master wordsmith, who enriched my English with style, punch and humor, and who helped hone my arguments for publication outside Israel.
Special thanks to Jared Diamond, who taught me to see the big picture; to Diego Olstein, who inspired me to write a story; and to Itzik Yahav and Deborah Harris, who helped spread the story around.
Dr. YUVAL NOAH HARARI has a PhD in history from the University of Oxford and now lectures at the Department of History, the Hebrew University of Jerusalem, specializing in world history. His research focuses on broad historical questions, such as: What is the relation between history and biology? Is there justice in history? Did people become happier as history unfolded?
Sapiens was originally published in Israel in 2011, quickly becoming a huge bestseller. It is now being translated into more than twenty languages. Thousands of people have taken Dr. Harari’s online course, A Brief History of Humankind, and his YouTube lectures have drawn hundreds of thousands of views worldwide. In 2012 he was awarded the Polonsky Prize for Creativity and Originality in the Humanistic Disciplines.
COVER DESIGN © SUZANNE DEAN
SAPIENS. Copyright © 2015 by Yuval Noah Harari. All rights reserved under International and Pan-American Copyright Conventions. By payment of the required fees, you have been granted the nonexclusive, nontransferable right to access and read the text of this e-book on-screen. No part of this text may be reproduced, transmitted, downloaded, decompiled, reverse-engineered, or stored in or introduced into any information storage and retrieval system, in any form or by any means, whether electronic or mechanical, now known or hereafter invented, without the express written permission of HarperCollins e-books.
Translated by the author, with the help of John Purcell and Haim Watzman.
First published in Hebrew in Israel in 2011 by Kinneret, Zmora-Bitan, Dvir.
Previously published in a slightly different form in Great Britain in 2014 by Harvill Secker, a division of the Random House Group Ltd.
FIRST U.S. EDITION
ISBN: 978-0-06-231609-7
EPub Edition February 2015 ISBN 9780062316103
Australia
HarperCollins Publishers Australia Pty. Ltd.
Level 13, 201 Elizabeth Street
Sydney, NSW 2000, Australia
Canada
HarperCollins Canada
2 Bloor Street East - 20th Floor
Toronto, ON M4W 1A8, Canada
New Zealand
HarperCollins Publishers New Zealand
Unit D1, 63 Apollo Drive
Rosedale 0632
Auckland, New Zealand
United Kingdom
HarperCollins Publishers Ltd.
1 London Bridge Street
London W6 8JB, UK
United States
HarperCollins Publishers Inc.
195 Broadway
New York, NY 10007
* Here and in the following pages, when speaking about Sapiens language, I refer to the basic linguistic abilities of our species and not to a particular dialect. English, Hindi and Chinese are all variants of Sapiens language. Apparently, even at the time of the Cognitive Revolution, different Sapiens groups had different dialects.
* A ‘horizon of possibilities’ means the entire spectrum of beliefs, practices and experiences that are open before a particular society, given its ecological, technological and cultural limitations. Each society and each individual usually explore only a tiny fraction of their horizon of possibilities.
* It might be argued that not all eighteen ancient Danubians actually died from the violence whose marks can be seen on their remains. Some were only injured. However, this is probably counterbalanced by deaths from trauma to soft tissues and from the invisible deprivations that accompany war.
* Even after Akkadian became the spoken language, Sumerian remained the language of administration and thus the language recorded with writing. Aspiring scribes thus had to speak Sumerian.
* An ‘intimate community’ is a group of people who know one another well and depend on each other for survival.
* Paradoxically, while psychological studies of subjective well-being rely on people’s ability to diagnose their happiness correctly, the basic raison d’être of psychotherapy is that people don’t really know themselves and that they sometimes need professional help to free themselves of self-destructive behaviours.
To my teacher, S. N. Goenka (1924–2013),
who lovingly taught me important things.
1. In vitro fertilisation: mastering creation.
1. Computer artwork © KTSDESIGN/Science Photo Library.
At the dawn of the third millennium, humanity wakes up, stretching its limbs and rubbing its eyes. Remnants of some awful nightmare are still drifting across its mind. ‘There was something with barbed wire, and huge mushroom clouds. Oh well, it was just a bad dream.’ Going to the bathroom, humanity washes its face, examines its wrinkles in the mirror, makes a cup of coffee and opens the diary. ‘Let’s see what’s on the agenda today.’
For thousands of years the answer to this question remained unchanged. The same three problems preoccupied the people of twentieth-century China, of medieval India and of ancient Egypt. Famine, plague and war were always at the top of the list. For generation after generation humans have prayed to every god, angel and saint, and have invented countless tools, institutions and social systems – but they continued to die in their millions from starvation, epidemics and violence. Many thinkers and prophets concluded that famine, plague and war must be an integral part of God’s cosmic plan or of our imperfect nature, and nothing short of the end of time would free us from them.
Yet at the dawn of the third millennium, humanity wakes up to an amazing realisation. Most people rarely think about it, but in the last few decades we have managed to rein in famine, plague and war. Of course, these problems have not been completely solved, but they have been transformed from incomprehensible and uncontrollable forces of nature into manageable challenges. We don’t need to pray to any god or saint to rescue us from them. We know quite well what needs to be done in order to prevent famine, plague and war – and we usually succeed in doing it.
True, there are still notable failures; but when faced with such failures we no longer shrug our shoulders and say, ‘Well, that’s the way things work in our imperfect world’ or ‘God’s will be done’. Rather, when famine, plague or war break out of our control, we feel that somebody must have screwed up, we set up a commission of inquiry, and promise ourselves that next time we’ll do better. And it actually works. Such calamities indeed happen less and less often. For the first time in history, more people die today from eating too much than from eating too little; more people die from old age than from infectious diseases; and more people commit suicide than are killed by soldiers, terrorists and criminals combined. In the early twenty-first century, the average human is far more likely to die from bingeing at McDonald’s than from drought, Ebola or an al-Qaeda attack.
Hence even though presidents, CEOs and generals still have their daily schedules full of economic crises and military conflicts, on the cosmic scale of history humankind can lift its eyes up and start looking towards new horizons. If we are indeed bringing famine, plague and war under control, what will replace them at the top of the human agenda? Like firefighters in a world without fire, so humankind in the twenty-first century needs to ask itself an unprecedented question: what are we going to do with ourselves? In a healthy, prosperous and harmonious world, what will demand our attention and ingenuity? This question becomes doubly urgent given the immense new powers that biotechnology and information technology are providing us with. What will we do with all that power?
Before answering this question, we need to say a few more words about famine, plague and war. The claim that we are bringing them under control may strike many as outrageous, extremely naïve, or perhaps callous. What about the billions of people scraping a living on less than $2 a day? What about the ongoing AIDS crisis in Africa, or the wars raging in Syria and Iraq? To address these concerns, let us take a closer look at the world of the early twenty-first century, before exploring the human agenda for the coming decades.
Let’s start with famine, which for thousands of years has been humanity’s worst enemy. Until recently most humans lived on the very edge of the biological poverty line, below which people succumb to malnutrition and hunger. A small mistake or a bit of bad luck could easily be a death sentence for an entire family or village. If heavy rains destroyed your wheat crop, or robbers carried off your goat herd, you and your loved ones might well have starved to death. Misfortune or stupidity on the collective level resulted in mass famines. When severe drought hit ancient Egypt or medieval India, it was not uncommon that 5 or 10 per cent of the population perished. Provisions became scarce; transport was too slow and expensive to import sufficient food; and governments were far too weak to save the day.
Open any history book and you are likely to come across horrific accounts of famished populations, driven mad by hunger. In April 1694 a French official in the town of Beauvais described the impact of famine and of soaring food prices, saying that his entire district was now filled with ‘an infinite number of poor souls, weak from hunger and wretchedness and dying from want, because, having no work or occupation, they lack the money to buy bread. Seeking to prolong their lives a little and somewhat to appease their hunger, these poor folk eat such unclean things as cats and the flesh of horses flayed and cast onto dung heaps. [Others consume] the blood that flows when cows and oxen are slaughtered, and the offal that cooks throw into the streets. Other poor wretches eat nettles and weeds, or roots and herbs which they boil in water.’1
Similar scenes took place all over France. Bad weather had ruined the harvests throughout the kingdom in the previous two years, so that by the spring of 1694 the granaries were completely empty. The rich charged exorbitant prices for whatever food they managed to hoard, and the poor died in droves. About 2.8 million French – 15 per cent of the population – starved to death between 1692 and 1694, while the Sun King, Louis XIV, was dallying with his mistresses in Versailles. The following year, 1695, famine struck Estonia, killing a fifth of the population. In 1696 it was the turn of Finland, where a quarter to a third of people died. Scotland suffered from severe famine between 1695 and 1698, some districts losing up to 20 per cent of their inhabitants.2
Most readers probably know how it feels when you miss lunch, when you fast on some religious holiday, or when you live for a few days on vegetable shakes as part of a new wonder diet. But how does it feel when you haven’t eaten for days on end and you have no clue where to get the next morsel of food? Most people today have never experienced this excruciating torment. Our ancestors, alas, knew it only too well. When they cried to God, ‘Deliver us from famine!’, this is what they had in mind.
During the last hundred years, technological, economic and political developments have created an increasingly robust safety net separating humankind from the biological poverty line. Mass famines still strike some areas from time to time, but they are exceptional, and they are almost always caused by human politics rather than by natural catastrophes. There are no longer natural famines in the world; there are only political famines. If people in Syria, Sudan or Somalia starve to death, it is because some politician wants them to.
In most parts of the planet, even if a person has lost his job and all of his possessions, he is unlikely to die from hunger. Private insurance schemes, government agencies and international NGOs may not rescue him from poverty, but they will provide him with enough daily calories to survive. On the collective level, the global trade network turns droughts and floods into business opportunities, and makes it possible to overcome food shortages quickly and cheaply. Even when wars, earthquakes or tsunamis devastate entire countries, international efforts usually succeed in preventing famine. Though hundreds of millions still go hungry almost every day, in most countries very few people actually starve to death.
Poverty certainly causes many other health problems, and malnutrition shortens life expectancy even in the richest countries on earth. In France, for example, 6 million people (about 10 per cent of the population) suffer from nutritional insecurity. They wake up in the morning not knowing whether they will have anything to eat for lunch; they often go to sleep hungry; and the nutrition they do obtain is unbalanced and unhealthy – lots of starch, sugar and salt, and not enough protein and vitamins.3 Yet nutritional insecurity isn’t famine, and France of the early twenty-first century isn’t France of 1694. Even in the worst slums around Beauvais or Paris, people don’t die because they have not eaten for weeks on end.
The same transformation has occurred in numerous other countries, most notably China. For millennia, famine stalked every Chinese regime from the Yellow Emperor to the Red communists. A few decades ago China was a byword for food shortages. Tens of millions of Chinese starved to death during the disastrous Great Leap Forward, and experts routinely predicted that the problem would only get worse. In 1974 the first World Food Conference was convened in Rome, and delegates were treated to apocalyptic scenarios. They were told that there was no way for China to feed its billion people, and that the world’s most populous country was heading towards catastrophe. In fact, it was heading towards the greatest economic miracle in history. Since 1974 hundreds of millions of Chinese have been lifted out of poverty, and though hundreds of millions more still suffer greatly from privation and malnutrition, for the first time in its recorded history China is now free from famine.
Indeed, in most countries today overeating has become a far worse problem than famine. In the eighteenth century Marie Antoinette allegedly advised the starving masses that if they ran out of bread, they should just eat cake instead. Today, the poor are following this advice to the letter. Whereas the rich residents of Beverly Hills eat lettuce salad and steamed tofu with quinoa, in the slums and ghettos the poor gorge on Twinkie cakes, Cheetos, hamburgers and pizza. In 2014 more than 2.1 billion people were overweight, compared to 850 million who suffered from malnutrition. Half of humankind is expected to be overweight by 2030.4 In 2010 famine and malnutrition combined killed about 1 million people, whereas obesity killed 3 million.5
After famine, humanity’s second great enemy was plagues and infectious diseases. Bustling cities linked by a ceaseless stream of merchants, officials and pilgrims were both the bedrock of human civilisation and an ideal breeding ground for pathogens. People consequently lived their lives in ancient Athens or medieval Florence knowing that they might fall ill and die next week, or that an epidemic might suddenly erupt and destroy their entire family in one swoop.
2. Medieval people personified the Black Death as a horrific demonic force beyond human control or comprehension.
2. The Triumph of Death, c.1562, Bruegel, Pieter the Elder © The Art Archive/Alamy Stock Photo.
The most famous such outbreak, the so-called Black Death, began in the 1330s, somewhere in east or central Asia, when the flea-dwelling bacterium Yersinia pestis started infecting humans bitten by the fleas. From there, riding on an army of rats and fleas, the plague quickly spread all over Asia, Europe and North Africa, taking less than twenty years to reach the shores of the Atlantic Ocean. Between 75 million and 200 million people died – more than a quarter of the population of Eurasia. In England, four out of ten people died, and the population dropped from a pre-plague high of 3.7 million people to a post-plague low of 2.2 million. The city of Florence lost 50,000 of its 100,000 inhabitants.6
The authorities were completely helpless in the face of the calamity. Except for organising mass prayers and processions, they had no idea how to stop the spread of the epidemic – let alone cure it. Until the modern era, humans blamed diseases on bad air, malicious demons and angry gods, and did not suspect the existence of bacteria and viruses. People readily believed in angels and fairies, but they could not imagine that a tiny flea or a single drop of water might contain an entire armada of deadly predators.
3. The real culprit was the minuscule Yersinia pestis bacterium.
3. © NIAID/CDC/Science Photo Library.
The Black Death was not a singular event, nor even the worst plague in history. More disastrous epidemics struck America, Australia and the Pacific Islands following the arrival of the first Europeans. Unbeknown to the explorers and settlers, they brought with them new infectious diseases against which the natives had no immunity. Up to 90 per cent of the local populations died as a result.7
On 5 March 1520 a small Spanish flotilla left the island of Cuba on its way to Mexico. The ships carried 900 Spanish soldiers along with horses, firearms and a few African slaves. One of the slaves, Francisco de Eguía, carried on his person a far deadlier cargo. Francisco didn’t know it, but somewhere among his trillions of cells a biological time bomb was ticking: the smallpox virus. After Francisco landed in Mexico the virus began to multiply exponentially within his body, eventually bursting out all over his skin in a terrible rash. The feverish Francisco was taken to bed in the house of a Native American family in the town of Cempoallan. He infected the family members, who infected the neighbours. Within ten days Cempoallan became a graveyard. Refugees spread the disease from Cempoallan to the nearby towns. As town after town succumbed to the plague, new waves of terrified refugees carried the disease throughout Mexico and beyond.
The Mayas in the Yucatán Peninsula believed that three evil gods – Ekpetz, Uzannkak and Sojakak – were flying from village to village at night, infecting people with the disease. The Aztecs blamed it on the gods Tezcatlipoca and Xipetotec, or perhaps on the black magic of the white people. Priests and doctors were consulted. They advised prayers, cold baths, rubbing the body with bitumen and smearing squashed black beetles on the sores. Nothing helped. Tens of thousands of corpses lay rotting in the streets, without anyone daring to approach and bury them. Entire families perished within a few days, and the authorities ordered that the houses were to be collapsed on top of the bodies. In some settlements half the population died.
By September 1520 the plague had reached the Valley of Mexico, and in October it entered the gates of the Aztec capital, Tenochtitlan – a magnificent metropolis of 250,000 people. Within two months at least a third of the population perished, including the Aztec emperor Cuitláhuac. Whereas in March 1520, when the Spanish fleet arrived, Mexico was home to 22 million people, by December only 14 million were still alive. Smallpox was only the first blow. While the new Spanish masters were busy enriching themselves and exploiting the natives, deadly waves of flu, measles and other infectious diseases struck Mexico one after the other, until in 1580 its population was down to less than 2 million.8
Two centuries later, on 18 January 1778, the British explorer Captain James Cook reached Hawaii. The Hawaiian islands were densely populated by half a million people, who lived in complete isolation from both Europe and America, and consequently had never been exposed to European and American diseases. Captain Cook and his men introduced the first flu, tuberculosis and syphilis pathogens to Hawaii. Subsequent European visitors added typhoid and smallpox. By 1853, only 70,000 survivors remained in Hawaii.9
Epidemics continued to kill tens of millions of people well into the twentieth century. In January 1918 soldiers in the trenches of northern France began dying in the thousands from a particularly virulent strain of flu, nicknamed ‘the Spanish Flu’. The front line was the end point of the most efficient global supply network the world had hitherto seen. Men and munitions were pouring in from Britain, the USA, India and Australia. Oil was sent from the Middle East, grain and beef from Argentina, rubber from Malaya and copper from Congo. In exchange, they all got Spanish Flu. Within a few months, about half a billion people – a third of the global population – came down with the virus. In India it killed 5 per cent of the population (15 million people). On the island of Tahiti, 14 per cent died. On Samoa, 20 per cent. In the copper mines of the Congo one out of five labourers perished. Altogether the pandemic killed between 50 million and 100 million people in less than a year. The First World War killed 40 million from 1914 to 1918.10
Alongside such epidemical tsunamis that struck humankind every few decades, people also faced smaller but more regular waves of infectious diseases, which killed millions every year. Children who lacked immunity were particularly susceptible to them, hence they are often called ‘childhood diseases’. Until the early twentieth century, about a third of children died before reaching adulthood from a combination of malnutrition and disease.
During the last century humankind became ever more vulnerable to epidemics, due to a combination of growing populations and better transport. A modern metropolis such as Tokyo or Kinshasa offers pathogens far richer hunting grounds than medieval Florence or 1520 Tenochtitlan, and the global transport network is today even more efficient than in 1918. A Spanish virus can make its way to Congo or Tahiti in less than twenty-four hours. We should therefore have expected to live in an epidemiological hell, with one deadly plague after another.
However, both the incidence and impact of epidemics have gone down dramatically in the last few decades. In particular, global child mortality is at an all-time low: less than 5 per cent of children die before reaching adulthood. In the developed world the rate is less than 1 per cent.11 This miracle is due to the unprecedented achievements of twentieth-century medicine, which has provided us with vaccinations, antibiotics, improved hygiene and a much better medical infrastructure.
For example, a global campaign of smallpox vaccination was so successful that in 1979 the World Health Organization declared that humanity had won, and that smallpox had been completely eradicated. It was the first epidemic humans had ever managed to wipe off the face of the earth. In 1967 smallpox had still infected 15 million people and killed 2 million of them, but in 2014 not a single person was either infected or killed by smallpox. The victory has been so complete that today the WHO has stopped vaccinating humans against smallpox.12
Every few years we are alarmed by the outbreak of some potential new plague, such as SARS in 2002/3, bird flu in 2005, swine flu in 2009/10 and Ebola in 2014. Yet thanks to efficient counter-measures these incidents have so far resulted in a comparatively small number of victims. SARS, for example, initially raised fears of a new Black Death, but eventually ended with the deaths of fewer than 1,000 people worldwide.13 The Ebola outbreak in West Africa seemed at first to spiral out of control, and on 26 September 2014 the WHO described it as ‘the most severe public health emergency seen in modern times’.14 Nevertheless, by early 2015 the epidemic had been reined in, and in January 2016 the WHO declared it over. It infected 30,000 people (killing 11,000 of them), caused massive economic damage throughout West Africa, and sent shockwaves of anxiety across the world; but it did not spread beyond West Africa, and its death toll was nowhere near the scale of the Spanish Flu or the Mexican smallpox epidemic.
Even the tragedy of AIDS, seemingly the greatest medical failure of the last few decades, can be seen as a sign of progress. Since its first major outbreak in the early 1980s, more than 30 million people have died of AIDS, and tens of millions more have suffered debilitating physical and psychological damage. It was hard to understand and treat the new epidemic, because AIDS is a uniquely devious disease. Whereas a person infected with smallpox falls visibly ill within days, an HIV-positive patient may seem perfectly healthy for months and even years, yet go on infecting others unknowingly. In addition, HIV itself does not kill. Rather, it destroys the immune system, thereby exposing the patient to numerous other diseases. It is these secondary diseases that actually kill AIDS victims. Consequently, when AIDS began to spread, it was especially difficult to understand what was happening. When two patients were admitted to a New York hospital in 1981, one ostensibly dying from pneumonia and the other from cancer, it was not at all evident that both were in fact victims of HIV, which may have infected them months or even years previously.15
However, despite these difficulties, after the medical community became aware of the mysterious new plague, it took scientists just two years to identify it, understand how the virus spreads and suggest effective ways to slow down the epidemic. Within another ten years new medicines turned AIDS from a death sentence into a chronic condition (at least for those wealthy enough to afford the treatment).16 Just think what would have happened if AIDS had erupted in 1581 rather than 1981. In all likelihood, nobody back then would have figured out what caused the epidemic, how it moved from person to person, or how it could be halted (let alone cured). Under such conditions, AIDS might have killed a much larger proportion of the human race, equalling and perhaps even surpassing the Black Death.
Despite the horrendous toll AIDS has taken, and despite the millions killed each year by long-established infectious diseases such as malaria, epidemics are a far smaller threat to human health today than in previous millennia. The vast majority of people die from non-infectious illnesses such as cancer and heart disease, or simply from old age.17 (Incidentally, cancer and heart disease are of course not new illnesses – they go back to antiquity. In previous eras, however, relatively few people lived long enough to die from them.)
Many fear that this is only a temporary victory, and that some unknown cousin of the Black Death is waiting just around the corner. No one can guarantee that plagues won’t make a comeback, but there are good reasons to think that in the arms race between doctors and germs, doctors run faster. New infectious diseases appear mainly as a result of chance mutations in pathogen genomes. These mutations allow the pathogens to jump from animals to humans, to overcome the human immune system, or to resist medicines such as antibiotics. Today such mutations probably occur and spread faster than in the past, due to human impact on the environment.18 Yet in the race against medicine, pathogens ultimately depend on the blind hand of fortune.
Doctors, in contrast, count on more than mere luck. Though science owes a huge debt to serendipity, doctors don’t just throw different chemicals into test tubes, hoping to chance upon some new medicine. With each passing year doctors accumulate more and better knowledge, which they use in order to design more effective medicines and treatments. Consequently, though in 2050 we will undoubtedly face much more resilient germs, medicine in 2050 will likely be able to deal with them more efficiently than today.19
In 2015 doctors announced the discovery of a completely new type of antibiotic – teixobactin – to which bacteria have no resistance as yet. Some scholars believe teixobactin may prove to be a game-changer in the fight against highly resistant germs.20 Scientists are also developing revolutionary new treatments that work in radically different ways to any previous medicine. For example, some research labs are already home to nano-robots, which may one day navigate through our bloodstream, identify illnesses and kill pathogens and cancerous cells.21 Microorganisms may have 4 billion years of cumulative experience fighting organic enemies, but they have exactly zero experience fighting bionic predators, and would therefore find it doubly difficult to evolve effective defences.
So while we cannot be certain that some new Ebola outbreak or an unknown flu strain won’t sweep across the globe and kill millions, we will not regard it as an inevitable natural calamity. Rather, we will see it as an inexcusable human failure and demand the heads of those responsible. When in late summer 2014 it seemed for a few terrifying weeks that Ebola was gaining the upper hand over the global health authorities, investigative committees were hastily set up. An initial report published on 18 October 2014 criticised the World Health Organization for its unsatisfactory reaction to the outbreak, blaming the epidemic on corruption and inefficiency in the WHO’s African branch. Further criticism was levelled at the international community as a whole for not responding quickly and forcefully enough. Such criticism assumes that humankind has the knowledge and tools to prevent plagues, and if an epidemic nevertheless gets out of control, it is due to human incompetence rather than divine anger. Similarly, the fact that AIDS continued to infect and kill millions in sub-Saharan Africa years after doctors had understood its mechanisms is rightly seen as the result of human failings rather than of cruel fortune.
So in the struggle against natural calamities such as AIDS and Ebola, the scales are tipping in humanity’s favour. But what about the dangers inherent in human nature itself? Biotechnology enables us to defeat bacteria and viruses, but it simultaneously turns humans themselves into an unprecedented threat. The same tools that enable doctors to quickly identify and cure new illnesses may also enable armies and terrorists to engineer even more terrible diseases and doomsday pathogens. It is therefore likely that major epidemics will continue to endanger humankind in the future only if humankind itself creates them, in the service of some ruthless ideology. The era when humankind stood helpless before natural epidemics is probably over. But we may come to miss it.
The third piece of good news is that wars too are disappearing. Throughout history most humans took war for granted, whereas peace was a temporary and precarious state. International relations were governed by the Law of the Jungle, according to which even if two polities lived in peace, war always remained an option. For example, even though Germany and France were at peace in 1913, everybody knew that they might be at each other’s throats in 1914. Whenever politicians, generals, business people and ordinary citizens made plans for the future, they always left room for war. From the Stone Age to the age of steam, and from the Arctic to the Sahara, every person on earth knew that at any moment the neighbours might invade their territory, defeat their army, slaughter their people and occupy their land.
During the second half of the twentieth century this Law of the Jungle was finally broken, if not rescinded. In most areas wars became rarer than ever. Whereas in ancient agricultural societies human violence caused about 15 per cent of all deaths, during the twentieth century violence caused only 5 per cent of deaths, and in the early twenty-first century it is responsible for about 1 per cent of global mortality.22 In 2012 about 56 million people died throughout the world; 620,000 of them died due to human violence (war killed 120,000 people, and crime killed another 500,000). In contrast, 800,000 committed suicide, and 1.5 million died of diabetes.23 Sugar is now more dangerous than gunpowder.
Even more importantly, a growing segment of humankind has come to see war as simply inconceivable. For the first time in history, when governments, corporations and private individuals consider their immediate future, many of them don’t think about war as a likely event. Nuclear weapons have turned war between superpowers into a mad act of collective suicide, and therefore forced the most powerful nations on earth to find alternative and peaceful ways to resolve conflicts. Simultaneously, the global economy has been transformed from a material-based economy into a knowledge-based economy. Previously the main sources of wealth were material assets such as gold mines, wheat fields and oil wells. Today the main source of wealth is knowledge. And whereas you can conquer oil fields through war, you cannot acquire knowledge that way. Hence as knowledge became the most important economic resource, the profitability of war declined and wars became increasingly restricted to those parts of the world – such as the Middle East and Central Africa – where the economies are still old-fashioned material-based economies.
In 1998 it made sense for Rwanda to seize and loot the rich coltan mines of neighbouring Congo, because this ore was in high demand for the manufacture of mobile phones and laptops, and Congo held 80 per cent of the world’s coltan reserves. Rwanda earned $240 million annually from the looted coltan. For poor Rwanda that was a lot of money.24 In contrast, it would have made no sense for China to invade California and seize Silicon Valley, for even if the Chinese could somehow prevail on the battlefield, there were no silicon mines to loot in Silicon Valley. Instead, the Chinese have earned billions of dollars from cooperating with hi-tech giants such as Apple and Microsoft, buying their software and manufacturing their products. What Rwanda earned from an entire year of looting Congolese coltan, the Chinese earn in a single day of peaceful commerce.
In consequence, the word ‘peace’ has acquired a new meaning. Previous generations thought about peace as the temporary absence of war. Today we think about peace as the implausibility of war. When in 1913 people said that there was peace between France and Germany, they meant that ‘there is no war going on at present between France and Germany, but who knows what next year will bring’. When today we say that there is peace between France and Germany, we mean that it is inconceivable under any foreseeable circumstances that war might break out between them. Such peace prevails not only between France and Germany, but between most (though not all) countries. There is no scenario for a serious war breaking out next year between Germany and Poland, between Indonesia and the Philippines, or between Brazil and Uruguay.
This New Peace is not just a hippie fantasy. Power-hungry governments and greedy corporations also count on it. When Mercedes plans its sales strategy in eastern Europe, it discounts the possibility that Germany might conquer Poland. A corporation importing cheap labourers from the Philippines is not worried that Indonesia might invade the Philippines next year. When the Brazilian government convenes to discuss next year’s budget, it’s unimaginable that the Brazilian defence minister will rise from his seat, bang his fist on the table and shout, ‘Just a minute! What if we want to invade and conquer Uruguay? You didn’t take that into account. We have to put aside $5 billion to finance this conquest.’ Of course, there are a few places where defence ministers still say such things, and there are regions where the New Peace has failed to take root. I know this very well because I live in one of these regions. But these are exceptions.
There is no guarantee, of course, that the New Peace will hold indefinitely. Just as nuclear weapons made the New Peace possible in the first place, so future technological developments might set the stage for new kinds of war. In particular, cyber warfare may destabilise the world by giving even small countries and non-state actors the ability to fight superpowers effectively. When the USA fought Iraq in 2003 it brought havoc to Baghdad and Mosul, but not a single bomb was dropped on Los Angeles or Chicago. In the future, though, a country such as North Korea or Iran could use logic bombs to shut down the power in California, blow up refineries in Texas and cause trains to collide in Michigan (‘logic bombs’ are malicious code planted in peacetime and triggered at a distance; it is highly likely that networks controlling vital infrastructure in the USA and many other countries are already crammed with such code).
However, we should not confuse ability with motivation. Though cyber warfare introduces new means of destruction, it doesn’t necessarily add new incentives to use them. Over the last seventy years humankind has broken not only the Law of the Jungle, but also the Chekhov Law. Anton Chekhov famously said that a gun appearing in the first act of a play will inevitably be fired in the third. Throughout history, if kings and emperors acquired some new weapon, sooner or later they were tempted to use it. Since 1945, however, humankind has learned to resist this temptation. The gun that appeared in the first act of the Cold War was never fired. By now we are accustomed to living in a world full of undropped bombs and unlaunched missiles, and have become experts in breaking both the Law of the Jungle and the Chekhov Law. If these laws ever do catch up with us, it will be our own fault – not our inescapable destiny.
4. Nuclear missiles on parade in Moscow, 1968. The gun that was always on display but never fired.
{© Sovfoto/UIG via Getty Images.}
What about terrorism, then? Even if central governments and powerful states have learned restraint, terrorists might have no such qualms about using new and destructive weapons. That is certainly a worrying possibility. However, terrorism is a strategy of weakness adopted by those who lack access to real power. At least in the past, terrorism worked by spreading fear rather than by causing significant material damage. Terrorists usually don’t have the strength to defeat an army, occupy a country or destroy entire cities. Whereas in 2010 obesity and related illnesses killed about 3 million people, terrorists killed a total of 7,697 people across the globe, most of them in developing countries.25 For the average American or European, Coca-Cola poses a far deadlier threat than al-Qaeda.
How, then, do terrorists manage to dominate the headlines and change the political situation throughout the world? By provoking their enemies to overreact. In essence, terrorism is a show. Terrorists stage a terrifying spectacle of violence that captures our imagination and makes us feel as if we are sliding back into medieval chaos. Consequently states often feel obliged to react to the theatre of terrorism with a show of security, orchestrating immense displays of force, such as the persecution of entire populations or the invasion of foreign countries. In most cases, this overreaction to terrorism poses a far greater threat to our security than the terrorists themselves.
Terrorists are like a fly that tries to destroy a china shop. The fly is so weak that it cannot budge even a single teacup. So it finds a bull, gets inside its ear and starts buzzing. The bull goes wild with fear and anger, and destroys the china shop. This is what happened in the Middle East in the last decade. Islamic fundamentalists could never have toppled Saddam Hussein by themselves. Instead they enraged the USA by the 9/11 attacks, and the USA destroyed the Middle Eastern china shop for them. Now they flourish in the wreckage. By themselves, terrorists are too weak to drag us back to the Middle Ages and re-establish the Jungle Law. They may provoke us, but in the end, it all depends on our reactions. If the Jungle Law comes back into force, it will not be the fault of terrorists.
Famine, plague and war will probably continue to claim millions of victims in the coming decades. Yet they are no longer unavoidable tragedies beyond the understanding and control of a helpless humanity. Instead, they have become manageable challenges. This does not belittle the suffering of hundreds of millions of poverty-stricken humans; of the millions felled each year by malaria, AIDS and tuberculosis; or of the millions trapped in violent vicious circles in Syria, the Congo or Afghanistan. The message is not that famine, plague and war have completely disappeared from the face of the earth, and that we should stop worrying about them. Just the opposite. Throughout history people felt these were unsolvable problems, so there was no point trying to put an end to them. People prayed to God for miracles, but they themselves did not seriously attempt to exterminate famine, plague and war. Those arguing that the world of 2016 is as hungry, sick and violent as it was in 1916 perpetuate this age-old defeatist view. They imply that all the huge efforts humans have made during the twentieth century have achieved nothing, and that medical research, economic reforms and peace initiatives have all been in vain. If so, what is the point of investing our time and resources in further medical research, novel economic reforms or new peace initiatives?
Acknowledging our past achievements sends a message of hope and responsibility, encouraging us to make even greater efforts in the future. Given our twentieth-century accomplishments, if people continue to suffer from famine, plague and war, we cannot blame it on nature or on God. It is within our power to make things better and to reduce the incidence of suffering even further.
Yet appreciating the magnitude of our achievements carries another message: history does not tolerate a vacuum. If the incidence of famine, plague and war is decreasing, something is bound to take their place on the human agenda. We had better think very carefully about what it is going to be. Otherwise, we might gain complete victory on the old battlefields only to be caught unawares on entirely new fronts. What are the projects that will replace famine, plague and war at the top of the human agenda in the twenty-first century?
One central project will be to protect humankind and the planet as a whole from the dangers inherent in our own power. We have managed to bring famine, plague and war under control thanks largely to our phenomenal economic growth, which provides us with abundant food, medicine, energy and raw materials. Yet this same growth destabilises the ecological equilibrium of the planet in myriad ways, which we have only begun to explore. Humankind has been late in acknowledging this danger, and has so far done very little about it. Despite all the talk of pollution, global warming and climate change, most countries have yet to make any serious economic or political sacrifices to improve the situation. When the moment comes to choose between economic growth and ecological stability, politicians, CEOs and voters almost always prefer growth. In the twenty-first century, we shall have to do better if we are to avoid catastrophe.
What else will humanity strive for? Would we be content merely to count our blessings, keep famine, plague and war at bay, and protect the ecological equilibrium? That might indeed be the wisest course of action, but humankind is unlikely to follow it. Humans are rarely satisfied with what they already have. The most common reaction of the human mind to achievement is not satisfaction, but craving for more. Humans are always on the lookout for something better, bigger, tastier. When humankind possesses enormous new powers, and when the threat of famine, plague and war is finally lifted, what will we do with ourselves? What will the scientists, investors, bankers and presidents do all day? Write poetry?
Success breeds ambition, and our recent achievements are now pushing humankind to set itself even more daring goals. Having secured unprecedented levels of prosperity, health and harmony, and given our past record and our current values, humanity is likely to make immortality, happiness and divinity its next targets. Having reduced mortality from starvation, disease and violence, we will now aim to overcome old age and even death itself. Having saved people from abject misery, we will now aim to make them positively happy. And having raised humanity above the beastly level of survival struggles, we will now aim to upgrade humans into gods, and turn Homo sapiens into Homo deus.
In the twenty-first century humans are likely to make a serious bid for immortality. Struggling against old age and death will merely carry on the time-honoured fight against famine and disease, and manifest the supreme value of contemporary culture: the worth of human life. We are constantly reminded that human life is the most sacred thing in the universe. Everybody says this: teachers in schools, politicians in parliaments, lawyers in courts and actors on theatre stages. The Universal Declaration of Human Rights adopted by the UN after the Second World War – which is perhaps the closest thing we have to a global constitution – categorically states that ‘the right to life’ is humanity’s most fundamental value. Since death clearly violates this right, death is a crime against humanity, and we ought to wage total war against it.
Throughout history, religions and ideologies did not sanctify life itself. They always sanctified something above or beyond earthly existence, and were consequently quite tolerant of death. Indeed, some of them have been downright fond of the Grim Reaper. Because Christianity, Islam and Hinduism insisted that the meaning of our existence depended on our fate in the afterlife, they viewed death as a vital and positive part of the world. Humans died because God decreed it, and their moment of death was a sacred metaphysical experience exploding with meaning. When a human was about to breathe his last, this was the time to call priests, rabbis and shamans, to take stock of one’s life, and to embrace one’s true role in the universe. Just try to imagine Christianity, Islam or Hinduism in a world without death – which is also a world without heaven, hell or reincarnation.
Modern science and modern culture have an entirely different take on life and death. They don’t think of death as a metaphysical mystery, and they certainly don’t view death as the source of life’s meaning. Rather, for modern people death is a technical problem that we can and should solve.
How exactly do humans die? Medieval fairy tales depicted Death as a figure in a hooded black cloak, his hand gripping a large scythe. A man lives his life, worrying about this and that, running here and there, when suddenly the Grim Reaper appears before him, taps him on the shoulder with a bony finger and says, ‘Come!’ And the man implores: ‘No, please! Wait just a year, a month, a day!’ But the hooded figure hisses: ‘No! You must come NOW!’ And this is how we die.
In reality, however, humans don’t die because a figure in a black cloak taps them on the shoulder, or because God decreed it, or because mortality is an essential part of some great cosmic plan. Humans always die due to some technical glitch. The heart stops pumping blood. The main artery is clogged by fatty deposits. Cancerous cells spread in the liver. Germs multiply in the lungs. And what is responsible for all these technical problems? Other technical problems. The heart stops pumping blood because not enough oxygen reaches the heart muscle. Cancerous cells spread because a chance genetic mutation rewrote their instructions. Germs settle in the lungs because somebody sneezed on the subway. Nothing metaphysical about it. It is all technical problems.
5. Death personified as the Grim Reaper: ‘Death and dying’, from a fourteenth-century French manuscript, Pilgrimage of the Human Life, Bodleian Library, Oxford.
{© Art Media/Print Collector/Getty Images.}
And every technical problem has a technical solution. We don’t need to wait for the Second Coming in order to overcome death. A couple of geeks in a lab can do it. If traditionally death was the speciality of priests and theologians, now the engineers are taking over. We can kill the cancerous cells with chemotherapy or nano-robots. We can exterminate the germs in the lungs with antibiotics. If the heart stops pumping, we can reinvigorate it with medicines and electric shocks – and if that doesn’t work, we can implant a new heart. True, at present we don’t have solutions to all technical problems. But this is precisely why we invest so much time and money in researching cancer, germs, genetics and nanotechnology.
Even ordinary people, who are not engaged in scientific research, have become used to thinking about death as a technical problem. When a woman goes to her physician and asks, ‘Doctor, what’s wrong with me?’ the doctor is likely to say, ‘Well, you have the flu,’ or ‘You have tuberculosis,’ or ‘You have cancer.’ But the doctor will never say, ‘You have death.’ And we are all under the impression that flu, tuberculosis and cancer are technical problems, to which we might someday find a technical solution.
Even when people die in a hurricane, a car accident or a war, we tend to view it as a technical failure that could and should have been prevented. If the government had only adopted a better policy; if the municipality had done its job properly; and if the military commander had taken a wiser decision, death would have been avoided. Death has become an almost automatic reason for lawsuits and investigations. ‘How could they have died? Somebody somewhere must have screwed up.’
The vast majority of scientists, doctors and scholars still distance themselves from outright dreams of immortality, claiming that they are trying to overcome only this or that particular problem. Yet because old age and death are the outcome of nothing but particular problems, there is no point at which doctors and scientists are going to stop and declare: ‘Thus far, and not another step. We have overcome tuberculosis and cancer, but we won’t lift a finger to fight Alzheimer’s. People can go on dying from that.’ The Universal Declaration of Human Rights does not say that humans have ‘the right to life until the age of ninety’. It says that every human has a right to life, period. That right isn’t limited by any expiry date.
An increasing minority of scientists and thinkers consequently speak more openly these days, and state that the flagship enterprise of modern science is to defeat death and grant humans eternal youth. Notable examples are the gerontologist Aubrey de Grey and the polymath and inventor Ray Kurzweil (winner of the 1999 US National Medal of Technology and Innovation). In 2012 Kurzweil was appointed a director of engineering at Google, and a year later Google launched a sub-company called Calico whose stated mission is ‘to solve death’.26 In 2009 Google appointed another immortality true-believer, Bill Maris, to preside over the Google Ventures investment fund. In a January 2015 interview, Maris said, ‘If you ask me today, is it possible to live to be 500, the answer is yes.’ Maris backs up his brave words with a lot of hard cash. Google Ventures is investing 36 per cent of its $2 billion portfolio in life sciences start-ups, including several ambitious life-extending projects. Using an American football analogy, Maris explained that in the fight against death, ‘We aren’t trying to gain a few yards. We are trying to win the game.’ Why? Because, says Maris, ‘it is better to live than to die’.27
Such dreams are shared by other Silicon Valley luminaries. PayPal co-founder Peter Thiel has recently confessed that he aims to live for ever. ‘I think there are probably three main modes of approaching [death],’ he explained. ‘You can accept it, you can deny it or you can fight it. I think our society is dominated by people who are into denial or acceptance, and I prefer to fight it.’ Many people are likely to dismiss such statements as teenage fantasies. Yet Thiel is somebody to be taken very seriously. He is one of the most successful and influential entrepreneurs in Silicon Valley with a private fortune estimated at $2.2 billion.28 The writing is on the wall: equality is out – immortality is in.
The breakneck development of fields such as genetic engineering, regenerative medicine and nanotechnology fosters ever more optimistic prophecies. Some experts believe that humans will overcome death by 2200, others say 2100. Kurzweil and de Grey are even more sanguine. They maintain that anyone possessing a healthy body and a healthy bank account in 2050 will have a serious shot at immortality by cheating death a decade at a time. According to Kurzweil and de Grey, every ten years or so we will march into the clinic and receive a makeover treatment that will not only cure illnesses, but will also regenerate decaying tissues, and upgrade hands, eyes and brains. Before the next treatment is due, doctors will have invented a plethora of new medicines, upgrades and gadgets. If Kurzweil and de Grey are right, there may already be some immortals walking next to you on the street – at least if you happen to be walking down Wall Street or Fifth Avenue.
In truth, they will be a-mortal rather than immortal. Unlike God, future superhumans could still die in some war or accident, and nothing could bring them back from the netherworld. However, unlike us mortals, their lives would have no expiry date. So long as no bomb shreds them to pieces and no truck runs them over, they could go on living indefinitely. Which will probably make them the most anxious people in history. We mortals daily take chances with our lives, because we know they are going to end anyhow. So we go on treks in the Himalayas, swim in the sea, and do many other dangerous things like crossing the street or eating out. But if you believe you can live for ever, you would be crazy to gamble on infinity like that.
Perhaps, then, we had better start with more modest aims, such as doubling life expectancy? In the twentieth century we almost doubled life expectancy from forty to seventy, so in the twenty-first century we should at least be able to roughly double it again, to 150. Though falling far short of immortality, this would still revolutionise human society. For starters, family structure, marriages and child–parent relationships would be transformed. Today, people still expect to be married ‘till death us do part’, and much of life revolves around having and raising children. Now try to imagine a person with a lifespan of 150 years. Getting married at forty, she still has 110 years to go. Will it be realistic to expect her marriage to last 110 years? Even Catholic fundamentalists might baulk at that. So the current trend of serial marriages is likely to intensify. Bearing two children in her forties, she will, by the time she is 120, have only a distant memory of the years she spent raising them – a rather minor episode in her long life. It’s hard to tell what kind of new parent–child relationship might develop under such circumstances.
Or consider professional careers. Today we assume that you learn a profession in your teens and twenties, and then spend the rest of your life in that line of work. You obviously learn new things even in your forties and fifties, but life is generally divided into a learning period followed by a working period. When you live to be 150 that won’t do, especially in a world that is constantly being shaken by new technologies. People will have much longer careers, and will have to reinvent themselves again and again even at the age of ninety.
At the same time, people will not retire at sixty-five and will not make way for the new generation with its novel ideas and aspirations. The physicist Max Planck famously said that science advances one funeral at a time. He meant that only when one generation passes away do new theories have a chance to root out old ones. This is true not only of science. Think for a moment about your own workplace. No matter whether you are a scholar, journalist, cook or football player, how would you feel if your boss were 120, his ideas were formulated when Victoria was still queen, and he was likely to stay your boss for a couple of decades more?
In the political sphere the results might be even more sinister. Would you mind having Putin stick around for another ninety years? On second thought, if people lived to 150, then in 2016 Stalin would still be ruling in Moscow, going strong at 138, Chairman Mao would be a middle-aged 123-year-old, and Princess Elizabeth would be sitting on her hands waiting to inherit from the 121-year-old George VI. Her son Charles would not get his turn until 2076.
Coming back to the realm of reality, it is far from certain whether Kurzweil’s and de Grey’s prophecies will come true by 2050 or 2100. My own view is that the hopes of eternal youth in the twenty-first century are premature, and whoever takes them too seriously is in for a bitter disappointment. It is not easy to live knowing that you are going to die, but it is even harder to believe in immortality and be proven wrong.
Although average life expectancy has doubled over the last hundred years, it is unwarranted to extrapolate and conclude that we can double it again to 150 in the coming century. In 1900 global life expectancy was no higher than forty because many people died young from malnutrition, infectious diseases and violence. Yet those who escaped famine, plague and war could live well into their seventies and eighties, which is the natural life span of Homo sapiens. Contrary to common notions, seventy-year-olds weren’t considered rare freaks of nature in previous centuries. Galileo Galilei died at seventy-seven, Isaac Newton at eighty-four, and Michelangelo lived to the ripe age of eighty-eight, without any help from antibiotics, vaccinations or organ transplants. Indeed, even chimpanzees in the jungle sometimes live into their sixties.29
In truth, so far modern medicine hasn’t extended our natural life span by a single year. Its great achievement has been to save us from premature death, and allow us to enjoy the full measure of our years. Even if we now overcome cancer, diabetes and the other major killers, it would mean only that almost everyone will get to live to ninety – but it will not be enough to reach 150, let alone 500. For that, medicine will need to re-engineer the most fundamental structures and processes of the human body, and discover how to regenerate organs and tissues. It is by no means clear that we can do that by 2100.
Nevertheless, every failed attempt to overcome death will get us a step closer to the target, and that will inspire greater hopes and encourage people to make even greater efforts. Though Google’s Calico probably won’t solve death in time to make Google co-founders Sergey Brin and Larry Page immortal, it will most probably make significant discoveries about cell biology, genetic medicines and human health. The next generation of Googlers could therefore start their attack on death from new and better positions. The scientists who cry immortality are like the boy who cried wolf: sooner or later, the wolf actually comes.
Hence even if we don’t achieve immortality in our lifetime, the war against death is still likely to be the flagship project of the coming century. When you take into account our belief in the sanctity of human life, add the dynamics of the scientific establishment, and top it all with the needs of the capitalist economy, a relentless war against death seems to be inevitable. Our ideological commitment to human life will never allow us simply to accept human death. As long as people die of something, we will strive to overcome it.
The scientific establishment and the capitalist economy will be more than happy to underwrite this struggle. Most scientists and bankers don’t care what they are working on, provided it gives them an opportunity to make new discoveries and greater profits. Can anyone imagine a more exciting scientific challenge than outsmarting death – or a more promising market than the market of eternal youth? If you are over forty, close your eyes for a minute and try to remember the body you had at twenty-five. Not only how it looked, but above all how it felt. If you could have that body back, how much would you be willing to pay for it? No doubt some people would be happy to forgo the opportunity, but enough customers would pay whatever it takes, constituting a well-nigh infinite market.
If all that is not enough, the fear of death ingrained in most humans will give the war against death an irresistible momentum. As long as people assumed that death is inevitable, they trained themselves from an early age to suppress the desire to live for ever, or harnessed it in favour of substitute goals. People want to live for ever, so they compose an ‘immortal’ symphony, they strive for ‘eternal glory’ in some war, or even sacrifice their lives so that their souls will ‘enjoy everlasting bliss in paradise’. A large part of our artistic creativity, our political commitment and our religious piety is fuelled by the fear of death.
Woody Allen, who has made a fabulous career out of the fear of death, was once asked if he hoped to live on for ever on the silver screen. Allen answered, ‘I’d rather live on in my apartment.’ He went on to add, ‘I don’t want to achieve immortality through my work. I want to achieve it by not dying.’ Eternal glory, nationalist remembrance ceremonies and dreams of paradise are very poor substitutes for what humans like Allen really want – not to die. Once people think (with or without good reason) that they have a serious chance of escaping death, the desire for life will refuse to go on pulling the rickety wagon of art, ideology and religion, and will sweep forward like an avalanche.
If you think that religious fanatics with burning eyes and flowing beards are ruthless, just wait and see what elderly retail moguls and ageing Hollywood starlets will do when they think the elixir of life is within reach. If and when science makes significant progress in the war against death, the real battle will shift from the laboratories to the parliaments, courthouses and streets. Once the scientific efforts are crowned with success, they will trigger bitter political conflicts. All the wars and conflicts of history might turn out to be but a pale prelude for the real struggle ahead of us: the struggle for eternal youth.
The second big project on the human agenda will probably be to find the key to happiness. Throughout history numerous thinkers, prophets and ordinary people defined happiness rather than life itself as the supreme good. In ancient Greece the philosopher Epicurus explained that worshipping gods is a waste of time, that there is no existence after death, and that happiness is the sole purpose of life. Most people in ancient times rejected Epicureanism, but today it has become the default view. Scepticism about the afterlife drives humankind to seek not only immortality, but also earthly happiness. For who would like to live for ever in eternal misery?
For Epicurus the pursuit of happiness was a personal quest. Modern thinkers, in contrast, tend to see it as a collective project. Without government planning, economic resources and scientific research, individuals will not get far in their quest for happiness. If your country is torn apart by war, if the economy is in crisis and if health care is non-existent, you are likely to be miserable. At the end of the eighteenth century the British philosopher Jeremy Bentham declared that the supreme good is ‘the greatest happiness of the greatest number’, and concluded that the sole worthy aim of the state, the market and the scientific community is to increase global happiness. Politicians should make peace, business people should foster prosperity and scholars should study nature, not for the greater glory of king, country or God – but so that you and I could enjoy a happier life.
During the nineteenth and twentieth centuries, although many paid lip service to Bentham’s vision, governments, corporations and laboratories focused on more immediate and well-defined aims. Countries measured their success by the size of their territory, the increase in their population and the growth of their GDP – not by the happiness of their citizens. Industrialised nations such as Germany, France and Japan established gigantic systems of education, health and welfare, yet these systems were aimed at strengthening the nation rather than ensuring individual well-being.
Schools were founded to produce skilful and obedient citizens who would serve the nation loyally. At eighteen, youths needed to be not only patriotic but also literate, so that they could read the brigadier’s order of the day and draw up tomorrow’s battle plans. They had to know mathematics in order to calculate the shell’s trajectory or crack the enemy’s secret code. They needed a reasonable command of electrics, mechanics and medicine in order to operate wireless sets, drive tanks and take care of wounded comrades. When they left the army they were expected to serve the nation as clerks, teachers and engineers, building a modern economy and paying lots of taxes.
The same went for the health system. At the end of the nineteenth century countries such as France, Germany and Japan began providing free health care for the masses. They financed vaccinations for infants, balanced diets for children and physical education for teenagers. They drained festering swamps, exterminated mosquitoes and built centralised sewage systems. The aim wasn’t to make people happy, but to make the nation stronger. The country needed sturdy soldiers and workers, healthy women who would give birth to more soldiers and workers, and bureaucrats who came to the office punctually at 8 a.m. instead of lying sick at home.
Even the welfare system was originally planned in the interest of the nation rather than of needy individuals. When Otto von Bismarck pioneered state pensions and social security in late nineteenth-century Germany, his chief aim was to ensure the loyalty of the citizens rather than to increase their well-being. You fought for your country when you were eighteen, and paid your taxes when you were forty, because you counted on the state to take care of you when you were seventy.30
In 1776 the Founding Fathers of the United States established the right to the pursuit of happiness as one of three unalienable human rights, alongside the right to life and the right to liberty. It’s important to note, however, that the American Declaration of Independence guaranteed the right to the pursuit of happiness, not the right to happiness itself. Crucially, Thomas Jefferson did not make the state responsible for its citizens’ happiness. Rather, he sought only to limit the power of the state. The idea was to reserve for individuals a private sphere of choice, free from state supervision. If I think I’ll be happier marrying John rather than Mary, living in San Francisco rather than Salt Lake City, and working as a bartender rather than a dairy farmer, then it’s my right to pursue happiness my way, and the state shouldn’t intervene even if I make the wrong choice.
Yet over the last few decades the tables have turned, and Bentham’s vision has been taken far more seriously. People increasingly believe that the immense systems established more than a century ago to strengthen the nation should actually serve the happiness and well-being of individual citizens. We are not here to serve the state – it is here to serve us. The right to the pursuit of happiness, originally envisaged as a restraint on state power, has imperceptibly morphed into the right to happiness – as if human beings have a natural right to be happy, and anything which makes us dissatisfied is a violation of our basic human rights, so the state should do something about it.
In the twentieth century per capita GDP was perhaps the supreme yardstick for evaluating national success. From this perspective, Singapore, each of whose citizens produces on average $56,000 worth of goods and services a year, is a more successful country than Costa Rica, whose citizens produce only $14,000 a year. But nowadays thinkers, politicians and even economists are calling to supplement or even replace GDP with GDH – gross domestic happiness. After all, what do people want? They don’t want to produce. They want to be happy. Production is important because it provides the material basis for happiness. But it is only the means, not the end. In one survey after another Costa Ricans report far higher levels of life satisfaction than Singaporeans. Would you rather be a highly productive but dissatisfied Singaporean, or a less productive but satisfied Costa Rican?
This kind of logic might drive humankind to make happiness its second main goal for the twenty-first century. At first glance this might seem a relatively easy project. If famine, plague and war are disappearing, if humankind experiences unprecedented peace and prosperity, and if life expectancy increases dramatically, surely all that will make humans happy, right?
Wrong. When Epicurus defined happiness as the supreme good, he warned his disciples that it is hard work to be happy. Material achievements alone will not satisfy us for long. Indeed, the blind pursuit of money, fame and pleasure will only make us miserable. Epicurus recommended, for example, eating and drinking in moderation, and curbing one’s sexual appetites. In the long run, a deep friendship will make us more content than a frenzied orgy. Epicurus outlined an entire ethic of dos and don’ts to guide people along the treacherous path to happiness.
Epicurus was apparently on to something. Being happy doesn’t come easy. Despite our unprecedented achievements in the last few decades, it is far from obvious that contemporary people are significantly more satisfied than their ancestors in bygone years. Indeed, it is an ominous sign that despite higher prosperity, comfort and security, the rate of suicide in the developed world is also much higher than in traditional societies.
In Peru, Guatemala, the Philippines and Albania – developing countries suffering from poverty and political instability – about one person in 100,000 commits suicide each year. In rich and peaceful countries such as Switzerland, France, Japan and New Zealand, twenty-five people per 100,000 take their own lives annually. In 1985 most South Koreans were poor, uneducated and tradition-bound, living under an authoritarian dictatorship. Today South Korea is a leading economic power, its citizens are among the best educated in the world, and it enjoys a stable and comparatively liberal democratic regime. Yet whereas in 1985 about nine South Koreans per 100,000 killed themselves, today the annual rate of suicide has more than tripled to thirty per 100,000.31
There are of course opposite and far more encouraging trends. Thus the drastic decrease in child mortality has surely brought an increase in human happiness, and partially compensated people for the stress of modern life. Still, even if we are somewhat happier than our ancestors, the increase in our well-being is far less than we might have expected. In the Stone Age, the average human had at his or her disposal about 4,000 calories of energy per day. This included not only food, but also the energy invested in preparing tools, clothing, art and campfires. Today Americans use on average 228,000 calories of energy per person per day, to feed not only their stomachs but also their cars, computers, refrigerators and televisions.32 The average American thus uses about sixty times more energy than the average Stone Age hunter-gatherer. Is the average American sixty times happier? We may well be sceptical about such rosy views.
And even if we have overcome many of yesterday’s miseries, attaining positive happiness may be far more difficult than abolishing downright suffering. It took just a piece of bread to make a starving medieval peasant joyful. How do you bring joy to a bored, overpaid and overweight engineer? The second half of the twentieth century was a golden age for the USA. Victory in the Second World War, followed by an even more decisive victory in the Cold War, turned it into the leading global superpower. Between 1950 and 2000 American GDP grew from $2 trillion to $12 trillion. Real per capita income doubled. The newly invented contraceptive pill made sex freer than ever. Women, gays, African Americans and other minorities finally got a bigger slice of the American pie. A flood of cheap cars, refrigerators, air conditioners, vacuum cleaners, dishwashers, laundry machines, telephones, televisions and computers changed daily life almost beyond recognition. Yet studies have shown that American subjective well-being levels in the 1990s remained roughly the same as they were in the 1950s.33
In Japan, average real income rose by a factor of five between 1958 and 1987, in one of the fastest economic booms of history. This avalanche of wealth, coupled with myriad positive and negative changes in Japanese lifestyles and social relations, had surprisingly little impact on Japanese subjective well-being levels. The Japanese in the 1990s were as satisfied – or dissatisfied – as they were in the 1950s.34
It appears that our happiness bangs against some mysterious glass ceiling that does not allow it to grow despite all our unprecedented accomplishments. Even if we provide free food for everybody, cure all diseases and ensure world peace, it won’t necessarily shatter that glass ceiling. Achieving real happiness is not going to be much easier than overcoming old age and death.
The glass ceiling of happiness is held in place by two stout pillars, one psychological, the other biological. On the psychological level, happiness depends on expectations rather than objective conditions. We don’t become satisfied by leading a peaceful and prosperous existence. Rather, we become satisfied when reality matches our expectations. The bad news is that as conditions improve, expectations balloon. Dramatic improvements in conditions, as humankind has experienced in recent decades, translate into greater expectations rather than greater contentment. If we don’t do something about this, our future achievements too might leave us as dissatisfied as ever.
On the biological level, both our expectations and our happiness are determined by our biochemistry, rather than by our economic, social or political situation. According to Epicurus, we are happy when we feel pleasant sensations and are free from unpleasant ones. Jeremy Bentham similarly maintained that nature gave dominion over man to two masters – pleasure and pain – and they alone determine everything we do, say and think. Bentham’s successor, John Stuart Mill, explained that happiness is nothing but pleasure and freedom from pain, and that beyond pleasure and pain there is no good and no evil. Anyone who tries to deduce good and evil from something else (such as the word of God, or the national interest) is fooling you, and perhaps fooling himself too.35
In the days of Epicurus such talk was blasphemous. In the days of Bentham and Mill it was radical subversion. But in the early twenty-first century this is scientific orthodoxy. According to the life sciences, happiness and suffering are nothing but different balances of bodily sensations. We never react to events in the outside world, but only to sensations in our own bodies. Nobody suffers because she lost her job, because she got divorced or because the government went to war. The only thing that makes people miserable is unpleasant sensations in their own bodies. Losing one’s job can certainly trigger depression, but depression itself is a kind of unpleasant bodily sensation. A thousand things may make us angry, but anger is never an abstraction. It is always felt as a sensation of heat and tension in the body, which is what makes anger so infuriating. Not for nothing do we say that we ‘burn’ with anger.
Conversely, science says that nobody is ever made happy by getting a promotion, winning the lottery or even finding true love. People are made happy by one thing and one thing only – pleasant sensations in their bodies. Imagine that you are Mario Götze, the attacking midfielder of the German football team in the 2014 World Cup Final against Argentina; 113 minutes have already elapsed, without a goal being scored. Only seven minutes remain before the dreaded penalty shoot-out. Some 75,000 excited fans fill the Maracanã stadium in Rio, with countless millions anxiously watching all over the world. You are a few yards from the Argentinian goal when André Schürrle sends a magnificent pass in your direction. You stop the ball with your chest, it drops down towards your leg, you give it a kick in mid-air, and you see it fly past the Argentinian goalkeeper and bury itself deep inside the net. Goooooooal! The stadium erupts like a volcano. Tens of thousands of people roar like mad, your teammates are racing to hug and kiss you, millions of people back home in Berlin and Munich collapse in tears before the television screen. You are ecstatic, but not because of the ball in the Argentinian net or the celebrations going on in crammed Bavarian Biergartens. You are actually reacting to the storm of sensations within you. Chills run up and down your spine, waves of electricity wash over your body, and it feels as if you are dissolving into millions of exploding energy balls.
You don’t have to score the winning goal in the World Cup Final to feel such sensations. If you receive an unexpected promotion at work, and start jumping for joy, you are reacting to the same kind of sensations. The deeper parts of your mind know nothing about football or about jobs. They know only sensations. If you get a promotion, but for some reason don’t feel any pleasant sensations – you will not feel satisfied. The opposite is also true. If you have just been fired (or lost a decisive football match), but you are experiencing very pleasant sensations (perhaps because you popped some pill), you might still feel on top of the world.
The bad news is that pleasant sensations quickly subside and sooner or later turn into unpleasant ones. Even scoring the winning goal in the World Cup Final doesn’t guarantee lifelong bliss. In fact, it might all be downhill from there. Similarly, if last year I received an unexpected promotion at work, I might still be occupying that new position, but the very pleasant sensations I experienced on hearing the news disappeared within hours. If I want to feel those wonderful sensations again, I must get another promotion. And another. And if I don’t get a promotion, I might end up far more bitter and angry than if I had remained a humble pawn.
This is all the fault of evolution. For countless generations our biochemical system adapted to increasing our chances of survival and reproduction, not our happiness. The biochemical system rewards actions conducive to survival and reproduction with pleasant sensations. But these are only an ephemeral sales gimmick. We struggle to get food and mates in order to avoid unpleasant sensations of hunger and to enjoy pleasing tastes and blissful orgasms. But nice tastes and blissful orgasms don’t last very long, and if we want to feel them again we have to go out looking for more food and mates.
What might have happened if a rare mutation had created a squirrel who, after eating a single nut, enjoys an everlasting sensation of bliss? Technically, this could actually be done by rewiring the squirrel’s brain. Who knows, perhaps it really happened to some lucky squirrel millions of years ago. But if so, that squirrel enjoyed an extremely happy and extremely short life, and that was the end of the rare mutation. For the blissful squirrel would not have bothered to look for more nuts, let alone mates. The rival squirrels, who felt hungry again five minutes after eating a nut, had much better chances of surviving and passing their genes to the next generation. For exactly the same reason, the nuts we humans seek to gather – lucrative jobs, big houses, good-looking partners – seldom satisfy us for long.
Some may say that this is not so bad, because it isn’t the goal that makes us happy – it’s the journey. Climbing Mount Everest is more satisfying than standing at the top; flirting and foreplay are more exciting than having an orgasm; and conducting groundbreaking lab experiments is more interesting than receiving praise and prizes. Yet this hardly changes the picture. It just indicates that evolution controls us with a broad range of pleasures. Sometimes it seduces us with sensations of bliss and tranquillity, while on other occasions it goads us forward with thrilling sensations of elation and excitement.
When an animal is looking for something that increases its chances of survival and reproduction (e.g., food, partners or social status), the brain produces sensations of alertness and excitement, which drive the animal to make even greater efforts because they are so very agreeable. In a famous experiment scientists connected electrodes to the brains of several rats, enabling the animals to create sensations of excitement simply by pressing a pedal. When the rats were given a choice between tasty food and pressing the pedal, they preferred the pedal (much like kids preferring to play video games rather than come down to dinner). The rats pressed the pedal again and again, until they collapsed from hunger and exhaustion.36 Humans too may prefer the excitement of the race to resting on the laurels of success. Yet what makes the race so attractive is the exhilarating sensations that go along with it. Nobody would have wanted to climb mountains, play video games or go on blind dates if such activities were accompanied solely by unpleasant sensations of stress, despair or boredom.37
Alas, the exciting sensations of the race are as transient as the blissful sensations of victory. The Don Juan enjoying the thrill of a one-night stand, the businessman enjoying biting his fingernails watching the Dow Jones rise and fall, and the gamer enjoying killing monsters on the computer screen will find no satisfaction remembering yesterday’s adventures. Like the rats pressing the pedal again and again, the Don Juans, business tycoons and gamers need a new kick every day. Worse still, here too expectations adapt to conditions, and yesterday’s challenges all too quickly become today’s tedium. Perhaps the key to happiness is neither the race nor the gold medal, but rather combining the right doses of excitement and tranquillity; but most of us tend to jump all the way from stress to boredom and back, remaining as discontented with one as with the other.
If science is right and our happiness is determined by our biochemical system, then the only way to ensure lasting contentment is by rigging this system. Forget economic growth, social reforms and political revolutions: in order to raise global happiness levels, we need to manipulate human biochemistry. And this is exactly what we have begun doing over the last few decades. Fifty years ago psychiatric drugs carried a severe stigma. Today, that stigma has been broken. For better or worse, a growing percentage of the population is taking psychiatric medicines on a regular basis, not only to cure debilitating mental illnesses, but also to face more mundane depressions and the occasional blues.
For example, increasing numbers of schoolchildren take stimulants such as Ritalin. In 2011, 3.5 million American children were taking medications for ADHD (attention deficit hyperactivity disorder). In the UK the number rose from 92,000 in 1997 to 786,000 in 2012.38 The original aim had been to treat attention disorders, but today completely healthy kids take such medications to improve their performance and live up to the growing expectations of teachers and parents.39 Many object to this development and argue that the problem lies with the education system rather than with the children. If pupils suffer from attention disorders, stress and low grades, perhaps we ought to blame outdated teaching methods, overcrowded classrooms and an unnaturally fast tempo of life. Maybe we should modify the schools rather than the kids? It is interesting to see how the arguments have evolved. People have been quarrelling about education methods for thousands of years. Whether in ancient China or Victorian Britain, everybody had his or her pet method, and vehemently opposed all alternatives. Yet hitherto everybody still agreed on one thing: in order to improve education, we need to change the schools. Today, for the first time in history, at least some people think it would be more efficient to change the pupils’ biochemistry.40
Armies are heading the same way: 12 per cent of American soldiers in Iraq and 17 per cent of American soldiers in Afghanistan took either sleeping pills or antidepressants to help them deal with the pressure and distress of war. Fear, depression and trauma are not caused by shells, booby traps or car bombs. They are caused by hormones, neurotransmitters and neural networks. Two soldiers may find themselves shoulder to shoulder in the same ambush; one will freeze in terror, lose his wits and suffer from nightmares for years after the event; the other will charge forward courageously and win a medal. The difference is in the soldiers’ biochemistry, and if we find ways to control it we will at one stroke produce both happier soldiers and more efficient armies.41
The biochemical pursuit of happiness is also the number one cause of crime in the world. In 2009 half of the inmates in US federal prisons got there because of drugs; 38 per cent of Italian prisoners were convicted of drug-related offences; 55 per cent of inmates in the UK reported that they committed their crimes in connection with either consuming or trading drugs. A 2001 report found that 62 per cent of Australian convicts were under the influence of drugs when committing the crime for which they were incarcerated.42 People drink alcohol to forget, they smoke pot to feel peaceful, they take cocaine and methamphetamines to be sharp and confident, whereas Ecstasy provides ecstatic sensations and LSD sends you to meet Lucy in the Sky with Diamonds. What some people hope to get by studying, working or raising a family, others try to obtain far more easily through the right dosage of molecules. This is an existential threat to the social and economic order, which is why countries wage a stubborn, bloody and hopeless war on biochemical crime.
The state hopes to regulate the biochemical pursuit of happiness, separating ‘bad’ manipulations from ‘good’ ones. The principle is clear: biochemical manipulations that strengthen political stability, social order and economic growth are allowed and even encouraged (e.g., those that calm hyperactive kids in school, or drive anxious soldiers forward into battle). Manipulations that threaten stability and growth are banned. But each year new drugs are born in the research labs of universities, pharmaceutical companies and criminal organisations, and the needs of the state and the market also keep changing. As the biochemical pursuit of happiness accelerates, so it will reshape politics, society and economics, and it will become ever harder to bring it under control.
And drugs are just the beginning. In research labs experts are already working on more sophisticated ways of manipulating human biochemistry, such as sending direct electrical stimuli to appropriate spots in the brain, or genetically engineering the blueprints of our bodies. No matter the exact method, gaining happiness through biological manipulation won’t be easy, for it requires altering the fundamental patterns of life. But then it wasn’t easy to overcome famine, plague and war either.
It is far from certain that humankind should invest so much effort in the biochemical pursuit of happiness. Some would argue that happiness simply isn’t important enough, and that it is misguided to regard individual satisfaction as the highest aim of human society. Others may agree that happiness is indeed the supreme good, yet would take issue with the biological definition of happiness as the experience of pleasant sensations.
Some 2,300 years ago Epicurus warned his disciples that immoderate pursuit of pleasure is likely to make them miserable rather than happy. A couple of centuries earlier Buddha had made an even more radical claim, teaching that the pursuit of pleasant sensations is in fact the very root of suffering. Such sensations are just ephemeral and meaningless vibrations. Even when we experience them, we don’t react to them with contentment; rather, we just crave more. Hence no matter how many blissful or exciting sensations I may experience, they will never satisfy me.
If I identify happiness with fleeting pleasant sensations, and crave to experience more and more of them, I have no choice but to pursue them constantly. When I finally get them, they quickly disappear, and because the mere memory of past pleasures will not satisfy me, I have to start all over again. Even if I continue this pursuit for decades, it will never bring me any lasting achievement; on the contrary, the more I crave these pleasant sensations, the more stressed and dissatisfied I will become. To attain real happiness, humans need to slow down the pursuit of pleasant sensations, not accelerate it.
This Buddhist view of happiness has a lot in common with the biochemical view. Both agree that pleasant sensations disappear as fast as they arise, and that as long as people crave pleasant sensations without actually experiencing them, they remain dissatisfied. However, this problem has two very different solutions. The biochemical solution is to develop products and treatments that will provide humans with an unending stream of pleasant sensations, so we will never be without them. The Buddha’s suggestion was to reduce our craving for pleasant sensations, and not allow them to control our lives. According to Buddha, we can train our minds to observe carefully how all sensations constantly arise and pass. When the mind learns to see our sensations for what they are – ephemeral and meaningless vibrations – we lose interest in pursuing them. For what is the point of running after something that disappears as fast as it arises?
At present, humankind has far greater interest in the biochemical solution. No matter what monks in their Himalayan caves or philosophers in their ivory towers say, for the capitalist juggernaut, happiness is pleasure. Period. With each passing year our tolerance for unpleasant sensations decreases, and our craving for pleasant sensations increases. Both scientific research and economic activity are geared to that end, each year producing better painkillers, new ice-cream flavours, more comfortable mattresses, and more addictive games for our smartphones, so that we will not suffer a single boring moment while waiting for the bus.
All this is hardly enough, of course. Since Homo sapiens was not adapted by evolution to experience constant pleasure, if that is what humankind nevertheless wants, ice cream and smartphone games will not do. It will be necessary to change our biochemistry and re-engineer our bodies and minds. So we are working on that. You may debate whether it is good or bad, but it seems that the second great project of the twenty-first century – to ensure global happiness – will involve re-engineering Homo sapiens so that it can enjoy everlasting pleasure.
In seeking bliss and immortality humans are in fact trying to upgrade themselves into gods. Not just because these are divine qualities, but because in order to overcome old age and misery humans will first have to acquire godlike control of their own biological substratum. If we ever have the power to engineer death and pain out of our system, that same power will probably be sufficient to engineer our system in almost any manner we like, and manipulate our organs, emotions and intelligence in myriad ways. You could buy for yourself the strength of Hercules, the sensuality of Aphrodite, the wisdom of Athena or the madness of Dionysus if that is what you are into. Up till now increasing human power relied mainly on upgrading our external tools. In the future it may rely more on upgrading the human body and mind, or on merging directly with our tools.
The upgrading of humans into gods may follow any of three paths: biological engineering, cyborg engineering and the engineering of non-organic beings.
Biological engineering starts with the insight that we are far from realising the full potential of organic bodies. For 4 billion years natural selection has been tweaking and tinkering with these bodies, so that we have gone from amoeba to reptiles to mammals to Sapiens. Yet there is no reason to think that Sapiens is the last station. Relatively small changes in genes, hormones and neurons were enough to transform Homo erectus – who could produce nothing more impressive than flint knives – into Homo sapiens, who produce spaceships and computers. Who knows what might be the outcome of a few more changes to our DNA, hormonal system or brain structure. Bioengineering is not going to wait patiently for natural selection to work its magic. Instead, bioengineers will take the old Sapiens body, and intentionally rewrite its genetic code, rewire its brain circuits, alter its biochemical balance, and even grow entirely new limbs. They will thereby create new godlings, who might be as different from us Sapiens as we are different from Homo erectus.
Cyborg engineering will go a step further, merging the organic body with non-organic devices such as bionic hands, artificial eyes, or millions of nano-robots that will navigate our bloodstream, diagnose problems and repair damage. Such a cyborg could enjoy abilities far beyond those of any organic body. For example, all parts of an organic body must be in direct contact with one another in order to function. If an elephant’s brain is in India, its eyes and ears in China and its feet in Australia, then this elephant is most probably dead, and even if it is in some mysterious sense alive, it cannot see, hear or walk. A cyborg, in contrast, could exist in numerous places at the same time. A cyborg doctor could perform emergency surgeries in Tokyo, in Chicago and in a space station on Mars, without ever leaving her Stockholm office. She will need only a fast Internet connection, and a few pairs of bionic eyes and hands. On second thought, why pairs? Why not quartets? Indeed, even those are actually superfluous. Why should a cyborg doctor hold a surgeon’s scalpel by hand, when she could connect her mind directly to the instrument?
This may sound like science fiction, but it’s already a reality. Monkeys have recently learned to control bionic hands and feet disconnected from their bodies, through electrodes implanted in their brains. Paralysed patients are able to move bionic limbs or operate computers by the power of thought alone. If you wish, you can already remote-control electric devices in your house using an electric ‘mind-reading’ helmet. The helmet requires no brain implants. It functions by reading the electric signals passing through your scalp. If you want to turn on the light in the kitchen, you just wear the helmet, imagine some preprogrammed mental sign (e.g., imagine your right hand moving), and the switch turns on. You can buy such helmets online for a mere $400.43
In early 2015 several hundred workers in the Epicenter high-tech hub in Stockholm had microchips implanted into their hands. The chips are about the size of a grain of rice and store personalised security information that enables workers to open doors and operate photocopiers with a wave of their hand. Soon they hope to make payments in the same way. One of the people behind the initiative, Hannes Sjoblad, explained that ‘We already interact with technology all the time. Today it’s a bit messy: we need pin codes and passwords. Wouldn’t it be easy to just touch with your hand?’44
Yet even cyborg engineering is relatively conservative, inasmuch as it assumes that organic brains will go on being the command-and-control centres of life. A bolder approach dispenses with organic parts altogether, and hopes to engineer completely non-organic beings. Neural networks will be replaced by intelligent software, which could surf both the virtual and non-virtual worlds, free from the limitations of organic chemistry. After 4 billion years of wandering inside the kingdom of organic compounds, life will break out into the vastness of the inorganic realm, and will take shapes that we cannot envision even in our wildest dreams. After all, our wildest dreams are still the product of organic chemistry.
Breaking out of the organic realm could also enable life to finally break out of planet Earth. For 4 billion years life remained confined to this tiny speck of a planet because natural selection made all organisms utterly dependent on the unique conditions of this flying rock. Not even the toughest bacteria can survive on Mars. A non-organic artificial intelligence, in contrast, will find it far easier to colonise alien planets. The replacement of organic life by inorganic beings may therefore sow the seed of a future galactic empire, ruled by the likes of Mr. Data rather than Captain Kirk.
We don’t know where these paths might lead us, nor what our godlike descendants will look like. Foretelling the future was never easy, and revolutionary biotechnologies make it even harder. For as difficult as it is to predict the impact of new technologies in fields like transportation, communication and energy, technologies for upgrading humans pose a completely different kind of challenge. Since they can be used to transform human minds and desires, people possessing present-day minds and desires by definition cannot fathom their implications.
For thousands of years history was full of technological, economic, social and political upheavals. Yet one thing remained constant: humanity itself. Our tools and institutions are very different from those of biblical times, but the deep structures of the human mind remain the same. This is why we can still find ourselves between the pages of the Bible, in the writings of Confucius or within the tragedies of Sophocles and Euripides. These classics were created by humans just like us, hence we feel that they talk about us. In modern theatre productions, Oedipus, Hamlet and Othello may wear jeans and T-shirts and have Facebook accounts, but their emotional conflicts are the same as in the original plays.
However, once technology enables us to re-engineer human minds, Homo sapiens will disappear, human history will come to an end and a completely new kind of process will begin, which people like you and me cannot comprehend. Many scholars try to predict how the world will look in the year 2100 or 2200. This is a waste of time. Any worthwhile prediction must take into account the ability to re-engineer human minds, and this is impossible. There are many wise answers to the question, ‘What would people with minds like ours do with biotechnology?’ Yet there are no good answers to the question, ‘What would beings with a different kind of mind do with biotechnology?’ All we can say is that people similar to us are likely to use biotechnology to re-engineer their own minds, and our present-day minds cannot grasp what might happen next.
Though the details are therefore obscure, we can nevertheless be sure about the general direction of history. In the twenty-first century, the third big project of humankind will be to acquire for us divine powers of creation and destruction, and upgrade Homo sapiens into Homo deus. This third project obviously subsumes the first two projects, and is fuelled by them. We want the ability to re-engineer our bodies and minds in order, above all, to escape old age, death and misery, but once we have it, who knows what else we might do with such ability? So we may well think of the new human agenda as consisting really of only one project (with many branches): attaining divinity.
If this sounds unscientific or downright eccentric, it is because people often misunderstand the meaning of divinity. Divinity isn’t a vague metaphysical quality. And it isn’t the same as omnipotence. When speaking of upgrading humans into gods, think more in terms of Greek gods or Hindu devas rather than the omnipotent biblical sky father. Our descendants would still have their foibles, kinks and limitations, just as Zeus and Indra had theirs. But they could love, hate, create and destroy on a much grander scale than us.
Throughout history most gods were believed to enjoy not omnipotence but rather specific super-abilities such as the ability to design and create living beings; to transform their own bodies; to control the environment and the weather; to read minds and to communicate at a distance; to travel at very high speeds; and of course to escape death and live indefinitely. Humans are in the business of acquiring all these abilities, and then some. Certain traditional abilities that were considered divine for many millennia have today become so commonplace that we hardly think about them. The average person now moves and communicates across distances much more easily than the Greek, Hindu or African gods of old.
For example, the Igbo people of Nigeria believe that the creator god Chukwu initially wanted to make people immortal. He sent a dog to tell humans that when someone dies, they should sprinkle ashes on the corpse, and the body will come back to life. Unfortunately, the dog was tired and he dallied on the way. The impatient Chukwu then sent a sheep, telling her to make haste with this important message. Alas, when the breathless sheep reached her destination, she garbled the instructions, and told the humans to bury their dead, thus making death permanent. This is why to this day we humans must die. If only Chukwu had a Twitter account instead of relying on laggard dogs and dim-witted sheep to deliver his messages!
In ancient agricultural societies, many religions displayed surprisingly little interest in metaphysical questions and the afterlife. Instead, they focused on the very mundane issue of increasing agricultural output. Thus the Old Testament God never promises any rewards or punishments after death. He instead tells the people of Israel that ‘If you carefully observe the commands that I’m giving you [. . .] then I will send rain on the land in its season [. . .] and you’ll gather grain, wine, and oil. I will provide grass in the fields for your livestock, and you’ll eat and be satisfied. Be careful! Otherwise, your hearts will deceive you and you will turn away to serve other gods and worship them. The wrath of God will burn against you so that he will restrain the heavens and it won’t rain. The ground won’t yield its produce and you’ll be swiftly destroyed from the good land that the Lord is about to give you’ (Deuteronomy 11:13–17). Scientists today can do much better than the Old Testament God. Thanks to artificial fertilisers, industrial insecticides and genetically modified crops, agricultural production nowadays outstrips the highest expectations ancient farmers had of their gods. And the parched state of Israel no longer fears that some angry deity will restrain the heavens and stop all rain – for the Israelis have recently built a huge desalination plant on the shores of the Mediterranean, so they can now get all their drinking water from the sea.
So far we have competed with the gods of old by creating better and better tools. In the not too distant future, we might create superhumans who will outstrip the ancient gods not in their tools, but in their bodily and mental faculties. If and when we get there, however, divinity will become as mundane as cyberspace – a wonder of wonders that we just take for granted.
We can be quite certain that humans will make a bid for divinity, because humans have many reasons to desire such an upgrade, and many ways to achieve it. Even if one promising path turns out to be a dead end, alternative routes will remain open. For example, we may discover that the human genome is far too complicated for serious manipulation, but this will not prevent the development of brain–computer interfaces, nano-robots or artificial intelligence.
No need to panic, though. At least not immediately. Upgrading Sapiens will be a gradual historical process rather than a Hollywood apocalypse. Homo sapiens is not going to be exterminated by a robot revolt. Rather, Homo sapiens is likely to upgrade itself step by step, merging with robots and computers in the process, until our descendants will look back and realise that they are no longer the kind of animal that wrote the Bible, built the Great Wall of China and laughed at Charlie Chaplin’s antics. This will not happen in a day, or a year. Indeed, it is already happening right now, through innumerable mundane actions. Every day millions of people decide to grant their smartphone a bit more control over their lives or try a new and more effective antidepressant drug. In pursuit of health, happiness and power, humans will gradually change first one of their features and then another, and another, until they will no longer be human.
Calm explanations aside, many people panic when they hear of such possibilities. They are happy to follow the advice of their smartphones or to take whatever drug the doctor prescribes, but when they hear of upgraded superhumans, they say: ‘I hope I will be dead before that happens.’ A friend once told me that what she fears most about growing old is becoming irrelevant, turning into a nostalgic old woman who cannot understand the world around her, or contribute much to it. This is what we fear collectively, as a species, when we hear of superhumans. We sense that in such a world, our identity, our dreams and even our fears will be irrelevant, and we will have nothing more to contribute. Whatever you are today – be it a devout Hindu cricket player or an aspiring lesbian journalist – in an upgraded world you will feel like a Neanderthal hunter in Wall Street. You won’t belong.
The Neanderthals didn’t have to worry about the Nasdaq, since they were shielded from it by tens of thousands of years. Nowadays, however, our world of meaning might collapse within decades. You cannot count on death to save you from becoming completely irrelevant. Even if gods don’t walk our streets by 2100, the attempt to upgrade Homo sapiens is likely to change the world beyond recognition in this century. Scientific research and technological developments are moving at a far faster rate than most of us can grasp.
If you speak with the experts, many of them will tell you that we are still very far away from genetically engineered babies or human-level artificial intelligence. But most experts think on a timescale of academic grants and college jobs. Hence, ‘very far away’ may mean twenty years, and ‘never’ may denote no more than fifty.
I still remember the day I first came across the Internet. It was back in 1993, when I was in high school. I went with a couple of buddies to visit our friend Ido (who is now a computer scientist). We wanted to play table tennis. Ido was already a huge computer fan, and before opening the ping-pong table he insisted on showing us the latest wonder. He connected the phone cable to his computer and pressed some keys. For a minute all we could hear were squeaks, shrieks and buzzes, and then silence. It didn’t work. We mumbled and grumbled, but Ido tried again. And again. And again. At last he gave a whoop and announced that he had managed to connect his computer to the central computer at the nearby university. ‘And what’s there, on the central computer?’ we asked. ‘Well,’ he admitted, ‘there’s nothing there yet. But you could put all kinds of things there.’ ‘Like what?’ we questioned. ‘I don’t know,’ he said, ‘all kinds of things.’ It didn’t sound very promising. We went to play ping-pong, and for the following weeks enjoyed a new pastime, making fun of Ido’s ridiculous idea. That was less than twenty-five years ago (at the time of writing). Who knows what will come to pass twenty-five years from now?
That’s why more and more individuals, organisations, corporations and governments are taking very seriously the quest for immortality, happiness and godlike powers. Insurance companies, pension funds, health systems and finance ministries are already aghast at the jump in life expectancy. People are living much longer than expected, and there is not enough money to pay for their pensions and medical treatment. As seventy threatens to become the new forty, experts are calling to raise the retirement age, and to restructure the entire job market.
When people realise how fast we are rushing towards the great unknown, and that they cannot count even on death to shield them from it, their reaction is to hope that somebody will hit the brakes and slow us down. But we cannot hit the brakes, for several reasons.
Firstly, nobody knows where the brakes are. While some experts are familiar with developments in one field, such as artificial intelligence, nanotechnology, big data or genetics, no one is an expert on everything. No one is therefore capable of connecting all the dots and seeing the full picture. Different fields influence one another in such intricate ways that even the best minds cannot fathom how breakthroughs in artificial intelligence might impact nanotechnology, or vice versa. Nobody can absorb all the latest scientific discoveries, nobody can predict how the global economy will look in ten years, and nobody has a clue where we are heading in such a rush. Since no one understands the system any more, no one can stop it.
Secondly, if we somehow succeed in hitting the brakes, our economy will collapse, along with our society. As explained in a later chapter, the modern economy needs constant and indefinite growth in order to survive. If growth ever stops, the economy won’t settle down to some cosy equilibrium; it will fall to pieces. That’s why capitalism encourages us to seek immortality, happiness and divinity. There’s a limit to how many shoes we can wear, how many cars we can drive and how many skiing holidays we can enjoy. An economy built on everlasting growth needs endless projects – just like the quests for immortality, bliss and divinity.
Well, if we need endless projects, why not settle for bliss and immortality, and at least put aside the frightening quest for superhuman powers? Because it is inextricable from the other two. When you develop bionic legs that enable paraplegics to walk again, you can also use the same technology to upgrade healthy people. When you discover how to stop memory loss among older people, the same treatments might enhance the memory of the young.
No clear line separates healing from upgrading. Medicine almost always begins by saving people from falling below the norm, but the same tools and know-how can then be used to surpass the norm. Viagra began life as a treatment for blood-pressure problems. To the surprise and delight of Pfizer, it transpired that Viagra can also overcome impotence. It enabled millions of men to regain normal sexual abilities; but soon enough men who had no impotence problems in the first place began using the same pill to surpass the norm, and acquire sexual powers they never had before.45
What happens to particular drugs can also happen to entire fields of medicine. Modern plastic surgery was born in the First World War, when Harold Gillies began treating facial injuries in the Aldershot military hospital.46 When the war was over, surgeons discovered that the same techniques could also turn perfectly healthy but ugly noses into more beautiful specimens. Though plastic surgery continued to help the sick and wounded, it devoted increasing attention to upgrading the healthy. Nowadays plastic surgeons make millions in private clinics whose explicit and sole aim is to upgrade the healthy and beautify the wealthy.47
The same might happen with genetic engineering. If a billionaire openly stated that he intended to engineer super-smart offspring, imagine the public outcry. But it won’t happen like that. We are more likely to slide down a slippery slope. It begins with parents whose genetic profile puts their children at high risk of deadly genetic diseases. So they perform in vitro fertilisation, and test the DNA of the fertilised egg. If everything is in order, all well and good. But if the DNA test discovers the dreaded mutations – the embryo is destroyed.
Yet why take a chance by fertilising just one egg? Better fertilise several, so that even if three or four are defective there is at least one good embryo. When this in vitro selection procedure becomes acceptable and cheap enough, its usage may spread. Mutations are a ubiquitous risk. All people carry in their DNA some harmful mutations and less-than-optimal alleles. Sexual reproduction is a lottery. (A famous – and probably apocryphal – anecdote tells of a meeting in 1923 between Nobel Prize laureate Anatole France and the beautiful and talented dancer Isadora Duncan. Discussing the then popular eugenics movement, Duncan said, ‘Just imagine a child with my beauty and your brains!’ France responded, ‘Yes, but imagine a child with my beauty and your brains.’) Well then, why not rig the lottery? Fertilise several eggs, and choose the one with the best combination. Once stem-cell research enables us to create an unlimited supply of human embryos on the cheap, you can select your optimal baby from among hundreds of candidates, all carrying your DNA, all perfectly natural, and none requiring any futuristic genetic engineering. Iterate this procedure for a few generations, and you could easily end up with superhumans (or a creepy dystopia).
But what if after fertilising even numerous eggs, you find that all of them contain some deadly mutations? Should you destroy all the embryos? Instead of doing that, why not replace the problematic genes? A breakthrough case involves mitochondrial DNA. Mitochondria are tiny organelles within human cells, which produce the energy used by the cell. They have their own set of genes, which is completely separate from the DNA in the cell’s nucleus. Defective mitochondrial DNA leads to various debilitating or even deadly diseases. It is technically feasible with current in vitro technology to overcome mitochondrial genetic diseases by creating a ‘three-parent baby’. The baby’s nuclear DNA comes from two parents, while the mitochondrial DNA comes from a third person. In 2000 Sharon Saarinen from West Bloomfield, Michigan, gave birth to a healthy baby girl, Alana. Alana’s nuclear DNA came from her mother, Sharon, and her father, Paul, but her mitochondrial DNA came from another woman. From a purely technical perspective, Alana has three biological parents. A year later, in 2001, the US government banned this treatment, due to safety and ethical concerns.48
However, on 3 February 2015 the British Parliament voted in favour of the so-called ‘three-parent embryo’ law, allowing this treatment – and related research – in the UK.49 At present it is technically unfeasible, and illegal, to replace nuclear DNA, but if and when the technical difficulties are solved, the same logic that favoured the replacement of defective mitochondrial DNA would seem to warrant doing the same with nuclear DNA.
Following selection and replacement, the next potential step is amendment. Once it becomes possible to amend deadly genes, why go through the hassle of inserting some foreign DNA, when you can just rewrite the code and turn a dangerous mutant gene into its benign version? Then we might start using the same mechanism to fix not just lethal genes, but also those responsible for less deadly illnesses, for autism, for stupidity and for obesity. Who would like his or her child to suffer from any of these? Suppose a genetic test indicates that your would-be daughter will in all likelihood be smart, beautiful and kind – but will suffer from chronic depression. Wouldn’t you want to save her from years of misery by a quick and painless intervention in the test tube?
And while you are at it, why not give the child a little push? Life is hard and challenging even for healthy people. So it would surely come in handy if the little girl had a stronger-than-normal immune system, an above-average memory or a particularly sunny disposition. And even if you don’t want that for your child – what if the neighbours are doing it for theirs? Would you have your child lag behind? And if the government forbids all citizens from engineering their babies, what if the North Koreans are doing it and producing amazing geniuses, artists and athletes that far outperform ours? And like that, in baby steps, we are on our way to a genetic child catalogue.
Healing is the initial justification for every upgrade. Find some professors experimenting in genetic engineering or brain–computer interfaces, and ask them why they are engaged in such research. In all likelihood they would reply that they are doing it to cure disease. ‘With the help of genetic engineering,’ they would explain, ‘we could defeat cancer. And if we could connect brains and computers directly, we could cure schizophrenia.’ Maybe, but it will surely not end there. When we successfully connect brains and computers, will we use this technology only to cure schizophrenia? If anybody really believes this, then they may know a great deal about brains and computers, but far less about the human psyche and human society. Once you achieve a momentous breakthrough, you cannot restrict its use to healing and completely forbid using it for upgrading.
Of course humans can and do limit their use of new technologies. Thus the eugenics movement fell from favour after the Second World War, and though trade in human organs is now both possible and potentially very lucrative, it has so far remained a peripheral activity. Designer babies may one day become as technologically feasible as murdering people to harvest their organs – yet remain as peripheral.
Just as we have escaped the clutches of Chekhov’s Law in warfare, we can also escape them in other fields of action. Some guns appear on stage without ever being fired. This is why it is so vital to think about humanity’s new agenda. Precisely because we have some choice regarding the use of new technologies, we had better understand what is happening and make up our minds about it before it makes up our minds for us.
The prediction that in the twenty-first century humankind is likely to aim for immortality, bliss and divinity may anger, alienate or frighten any number of people, so a few clarifications are in order.
Firstly, this is not what most individuals will actually do in the twenty-first century. It is what humankind as a collective will do. Most people will probably play only a minor role, if any, in these projects. Even if famine, plague and war become less prevalent, billions of humans in developing countries and seedy neighbourhoods will continue to deal with poverty, illness and violence even as the elites are already reaching for eternal youth and godlike powers. This seems patently unjust. One could argue that as long as there is a single child dying from malnutrition or a single adult killed in drug-lord warfare, humankind should focus all its efforts on combating these woes. Only once the last sword is beaten into a ploughshare should we turn our minds to the next big thing. But history doesn’t work like that. Those living in palaces have always had different agendas to those living in shacks, and that is unlikely to change in the twenty-first century.
Secondly, this is a historical prediction, not a political manifesto. Even if we disregard the fate of slum-dwellers, it is far from clear that we should be aiming at immortality, bliss and divinity. Adopting these particular projects might be a big mistake. But history is full of big mistakes. Given our past record and our current values, we are likely to reach out for bliss, divinity and immortality – even if it kills us.
Thirdly, reaching out is not the same as obtaining. History is often shaped by exaggerated hopes. Twentieth-century Russian history was largely shaped by the communist attempt to overcome inequality, but it didn’t succeed. My prediction is focused on what humankind will try to achieve in the twenty-first century – not what it will succeed in achieving. Our future economy, society and politics will be shaped by the attempt to overcome death. It does not follow that in 2100 humans will be immortal.
Fourthly, and most importantly, this prediction is less of a prophecy and more a way of discussing our present choices. If the discussion makes us choose differently, so that the prediction is proven wrong, all the better. What’s the point of making predictions if they cannot change anything?
Some complex systems, such as the weather, are oblivious to our predictions. The process of human development, in contrast, reacts to them. Indeed, the better our forecasts, the more reactions they engender. Hence paradoxically, as we accumulate more data and increase our computing power, events become wilder and more unexpected. The more we know, the less we can predict. Imagine, for example, that one day experts decipher the basic laws of the economy. Once this happens, banks, governments, investors and customers will begin to use this new knowledge to act in novel ways, and gain an edge over their competitors. For what is the use of new knowledge if it doesn’t lead to novel behaviours? Alas, once people change the way they behave, the economic theories become obsolete. We may know how the economy functioned in the past – but we no longer understand how it functions in the present, not to mention the future.
This is not a hypothetical example. In the middle of the nineteenth century Karl Marx reached brilliant economic insights. Based on these insights he predicted an increasingly violent conflict between the proletariat and the capitalists, ending with the inevitable victory of the former and the collapse of the capitalist system. Marx was certain that the revolution would start in countries that spearheaded the Industrial Revolution – such as Britain, France and the USA – and spread to the rest of the world.
Marx forgot that capitalists know how to read. At first only a handful of disciples took Marx seriously and read his writings. But as these socialist firebrands gained adherents and power, the capitalists became alarmed. They too perused Das Kapital, adopting many of the tools and insights of Marxist analysis. In the twentieth century everybody from street urchins to presidents embraced a Marxist approach to economics and history. Even diehard capitalists who vehemently resisted the Marxist prognosis still made use of the Marxist diagnosis. When the CIA analysed the situation in Vietnam or Chile in the 1960s, it divided society into classes. When Nixon or Thatcher looked at the globe, they asked themselves who controls the vital means of production. From 1989 to 1991 George Bush oversaw the demise of the Evil Empire of communism, only to be defeated in the 1992 elections by Bill Clinton. Clinton’s winning campaign strategy was summarised in the motto: ‘It’s the economy, stupid.’ Marx could not have said it better.
As people adopted the Marxist diagnosis, they changed their behaviour accordingly. Capitalists in countries such as Britain and France strove to better the lot of the workers, strengthen their national consciousness and integrate them into the political system. Consequently when workers began voting in elections and Labour gained power in one country after another, the capitalists could still sleep soundly in their beds. As a result, Marx’s predictions came to naught. Communist revolutions never engulfed the leading industrial powers such as Britain, France and the USA, and the dictatorship of the proletariat was consigned to the dustbin of history.
This is the paradox of historical knowledge. Knowledge that does not change behaviour is useless. But knowledge that changes behaviour quickly loses its relevance. The more data we have and the better we understand history, the faster history alters its course, and the faster our knowledge becomes outdated.
Centuries ago human knowledge increased slowly, so politics and economics changed at a leisurely pace too. Today our knowledge is increasing at breakneck speed, and theoretically we should understand the world better and better. But the very opposite is happening. Our new-found knowledge leads to faster economic, social and political changes; in an attempt to understand what is happening, we accelerate the accumulation of knowledge, which leads only to faster and greater upheavals. Consequently we are less and less able to make sense of the present or forecast the future. In 1016 it was relatively easy to predict how Europe would look in 1050. Sure, dynasties might fall, unknown raiders might invade, and natural disasters might strike; yet it was clear that in 1050 Europe would still be ruled by kings and priests, that it would be an agricultural society, that most of its inhabitants would be peasants, and that it would continue to suffer greatly from famines, plagues and wars. In contrast, in 2016 we have no idea how Europe will look in 2050. We cannot say what kind of political system it will have, how its job market will be structured, or even what kind of bodies its inhabitants will possess.
If history doesn’t follow any stable rules, and if we cannot predict its future course, why study it? It often seems that the chief aim of science is to predict the future – meteorologists are expected to forecast whether tomorrow will bring rain or sunshine; economists should know whether devaluing the currency will avert or precipitate an economic crisis; good doctors foresee whether chemotherapy or radiation therapy will be more successful in curing lung cancer. Similarly, historians are asked to examine the actions of our ancestors so that we can repeat their wise decisions and avoid their mistakes. But it almost never works like that because the present is just too different from the past. It is a waste of time to study Hannibal’s tactics in the Second Punic War so as to copy them in the Third World War. What worked well in cavalry battles will not necessarily be of much benefit in cyber warfare.
Science is not just about predicting the future, though. Scholars in all fields often seek to broaden our horizons, thereby opening before us new and unknown futures. This is especially true of history. Though historians occasionally try their hand at prophecy (without notable success), the study of history aims above all to make us aware of possibilities we don’t normally consider. Historians study the past not in order to repeat it, but in order to be liberated from it.
Each and every one of us has been born into a given historical reality, ruled by particular norms and values, and managed by a unique economic and political system. We take this reality for granted, thinking it is natural, inevitable and immutable. We forget that our world was created by an accidental chain of events, and that history shaped not only our technology, politics and society, but also our thoughts, fears and dreams. The cold hand of the past emerges from the grave of our ancestors, grips us by the neck and directs our gaze towards a single future. We have felt that grip from the moment we were born, so we assume that it is a natural and inescapable part of who we are. Therefore we seldom try to shake ourselves free, and envision alternative futures.
Studying history aims to loosen the grip of the past. It enables us to turn our head this way and that, and begin to notice possibilities that our ancestors could not imagine, or didn’t want us to imagine. By observing the accidental chain of events that led us here, we realise how our very thoughts and dreams took shape – and we can begin to think and dream differently. Studying history will not tell us what to choose, but at least it gives us more options.
Movements seeking to change the world often begin by rewriting history, thereby enabling people to reimagine the future. Whether you want workers to go on a general strike, women to take possession of their bodies, or oppressed minorities to demand political rights – the first step is to retell their history. The new history will explain that ‘our present situation is neither natural nor eternal. Things were different once. Only a string of chance events created the unjust world we know today. If we act wisely, we can change that world, and create a much better one.’ This is why Marxists recount the history of capitalism; why feminists study the formation of patriarchal societies; and why African Americans commemorate the horrors of the slave trade. They aim not to perpetuate the past, but rather to be liberated from it.
What’s true of grand social revolutions is equally true at the micro level of everyday life. A young couple building a new home for themselves may ask the architect for a nice lawn in the front yard. Why a lawn? ‘Because lawns are beautiful,’ the couple might explain. But why do they think so? It has a history behind it.
Stone Age hunter-gatherers did not cultivate grass at the entrance to their caves. No green meadow welcomed the visitors to the Athenian Acropolis, the Roman Capitol, the Jewish Temple in Jerusalem or the Forbidden City in Beijing. The idea of nurturing a lawn at the entrance to private residences and public buildings was born in the castles of French and English aristocrats in the late Middle Ages. In the early modern age this habit struck deep roots, and became the trademark of nobility.
Well-kept lawns demanded land and a lot of work, particularly in the days before lawnmowers and automatic water sprinklers. In exchange, they produce nothing of value. You can’t even graze animals on them, because they would eat and trample the grass. Poor peasants could not afford wasting precious land or time on lawns. The neat turf at the entrance to chateaux was accordingly a status symbol nobody could fake. It boldly proclaimed to every passerby: ‘I am so rich and powerful, and I have so many acres and serfs, that I can afford this green extravaganza.’ The bigger and neater the lawn, the more powerful the dynasty. If you came to visit a duke and saw that his lawn was in bad shape, you knew he was in trouble.50
The precious lawn was often the setting for important celebrations and social events, and at all other times was strictly off-limits. To this day, in countless palaces, government buildings and public venues a stern sign commands people to ‘Keep off the grass’. In my former Oxford college the entire quad was formed of a large, attractive lawn, on which we were allowed to walk or sit only one day a year. On any other day, woe to the poor student whose foot desecrated the holy turf.
Royal palaces and ducal chateaux turned the lawn into a symbol of authority. When in the late modern period kings were toppled and dukes were guillotined, the new presidents and prime ministers kept the lawns. Parliaments, supreme courts, presidential residences and other public buildings increasingly proclaimed their power in row upon row of neat green blades. Simultaneously, lawns conquered the world of sports. For thousands of years humans played on almost every conceivable kind of ground, from ice to desert. Yet in the last two centuries, the really important games – such as football and tennis – have been played on lawns. Provided, of course, you have money. In the favelas of Rio de Janeiro the future generation of Brazilian football is kicking makeshift balls over sand and dirt. But in the wealthy suburbs, the sons of the rich are enjoying themselves over meticulously kept lawns.
6. The lawns of Château de Chambord, in the Loire Valley. King François I built it in the early sixteenth century. This is where it all began.
{© CHICUREL Arnaud/Getty Images.}
7. A welcoming ceremony in honour of Queen Elizabeth II – on the White House lawn.
{© American Spirit/Shutterstock.com.}
8. Mario Götze scores the decisive goal, giving Germany the World Cup in 2014 – on the Maracanã lawn.
{© Imagebank/Chris Brunskill/Getty Images/Bridgeman Images.}
9. Petit-bourgeois paradise.
{© H. Armstrong Roberts/ClassicStock/Getty Images.}
Humans thereby came to identify lawns with political power, social status and economic wealth. No wonder that in the nineteenth century the rising bourgeoisie enthusiastically adopted the lawn. At first only bankers, lawyers and industrialists could afford such luxuries at their private residences. Yet when the Industrial Revolution broadened the middle class and gave rise to the lawnmower and then the automatic sprinkler, millions of families could suddenly afford a home turf. In American suburbia a spick-and-span lawn turned from a rich person’s luxury into a middle-class necessity.
This was when a new rite was added to the suburban liturgy. After Sunday morning service at church, many people devotedly mowed their lawns. Walking along the streets, you could quickly ascertain the wealth and position of every family by the size and quality of their turf. There is no surer sign that something is wrong at the Joneses’ than a neglected lawn in the front yard. Grass is nowadays the most widespread crop in the USA after maize and wheat, and the lawn industry (plants, manure, mowers, sprinklers, gardeners) accounts for billions of dollars every year.51
The lawn did not remain solely a European or American craze. Even people who have never visited the Loire Valley see US presidents giving speeches on the White House lawn, important football games played out in green stadiums, and Homer and Bart Simpson quarrelling about whose turn it is to mow the grass. People all over the globe associate lawns with power, money and prestige. The lawn has therefore spread far and wide, and is now set to conquer even the heart of the Muslim world. Qatar’s newly built Museum of Islamic Art is flanked by magnificent lawns that hark back to Louis XIV’s Versailles much more than to Haroun al-Rashid’s Baghdad. They were designed and constructed by an American company, and their more than 100,000 square yards of grass – in the midst of the Arabian desert – require a stupendous amount of fresh water each day to stay green. Meanwhile, in the suburbs of Doha and Dubai, middle-class families pride themselves on their lawns. If it were not for the white robes and black hijabs, you could easily think you were in the Midwest rather than the Middle East.
Having read this short history of the lawn, when you now come to plan your dream house you might think twice about having a lawn in the front yard. You are of course still free to do it. But you are also free to shake off the cultural cargo bequeathed to you by European dukes, capitalist moguls and the Simpsons – and imagine for yourself a Japanese rock garden, or some altogether new creation. This is the best reason to learn history: not in order to predict the future, but to free yourself of the past and imagine alternative destinies. Of course this is not total freedom – we cannot avoid being shaped by the past. But some freedom is better than none.
All the predictions that pepper this book are no more than an attempt to discuss present-day dilemmas, and an invitation to change the future. Predicting that humankind will try to gain immortality, bliss and divinity is much like predicting that people building a house will want a lawn in their front yard. It sounds very likely. But once you say it out loud, you can begin to think about alternatives.
People are taken aback by dreams of immortality and divinity not because they sound so foreign and unlikely, but because it is uncommon to be so blunt. Yet when they start thinking about it, most people realise that it actually makes a lot of sense. Despite the technological hubris of these dreams, ideologically they are old news. For 300 years the world has been dominated by humanism, which sanctifies the life, happiness and power of Homo sapiens. The attempt to gain immortality, bliss and divinity merely takes the long-standing humanist ideals to their logical conclusion. It places openly on the table what we have for a long time kept hidden under our napkin.
Yet I would now like to place something else on the table: a gun. A gun that appears in Act I, to fire in Act III. The following chapters discuss how humanism – the worship of humankind – has conquered the world. Yet the rise of humanism also contains the seeds of its downfall. While the attempt to upgrade humans into gods takes humanism to its logical conclusion, it simultaneously exposes humanism’s inherent flaws. If you start with a flawed ideal, you often appreciate its defects only when the ideal is close to realisation.
We can already see this process at work in geriatric hospital wards. Due to an uncompromising humanist belief in the sanctity of human life, we keep people alive till they reach such a pitiful state that we are forced to ask, ‘What exactly is so sacred here?’ Due to similar humanist beliefs, in the twenty-first century we are likely to push humankind as a whole beyond its limits. The same technologies that can upgrade humans into gods might also make humans irrelevant. For example, computers powerful enough to understand and overcome the mechanisms of ageing and death will probably also be powerful enough to replace humans in any and all tasks.
Hence the real agenda in the twenty-first century is going to be far more complicated than what this long opening chapter has suggested. At present it might seem that immortality, bliss and divinity occupy the top slots on our agenda. But once we come nearer to achieving these goals the resulting upheavals are likely to deflect us towards entirely different destinations. The future described in this chapter is merely the future of the past – i.e., a future based on the ideas and hopes that dominated the world for the last 300 years. The real future – i.e., a future born of the new ideas and hopes of the twenty-first century – might be completely different.
To understand all this we need to go back and investigate who Homo sapiens really is, how humanism became the dominant world religion and why attempting to fulfil the humanist dream is likely to cause its disintegration. This is the basic plan of the book.
The first part of the book looks at the relationship between Homo sapiens and other animals, in an attempt to comprehend what makes our species so special. Some readers may wonder why animals receive so much attention in a book about the future. In my view, you cannot have a serious discussion about the nature and future of humankind without beginning with our fellow animals. Homo sapiens does its best to forget the fact, but it is an animal. And it is doubly important to remember our origins at a time when we seek to turn ourselves into gods. No investigation of our divine future can ignore our own animal past, or our relations with other animals – because the relationship between humans and animals is the best model we have for future relations between superhumans and humans. You want to know how super-intelligent cyborgs might treat ordinary flesh-and-blood humans? Better start by investigating how humans treat their less intelligent animal cousins. It’s not a perfect analogy, of course, but it is the best archetype we can actually observe rather than just imagine.
Based on the conclusions of this first part, the second part of the book examines the bizarre world Homo sapiens has created in the last millennia, and the path that took us to our present crossroads. How did Homo sapiens come to believe in the humanist creed, according to which the universe revolves around humankind and humans are the source of all meaning and authority? What are the economic, social and political implications of this creed? How does it shape our daily life, our art and our most secret desires?
The third and last part of the book comes back to the early twenty-first century. Based on a much deeper understanding of humankind and of the humanist creed, it describes our current predicament and our possible futures. Why might attempts to fulfil humanism result in its downfall? How would the search for immortality, bliss and divinity shake the foundations of our belief in humanity? What signs foretell this cataclysm, and how is it reflected in the day-to-day decisions each of us makes? And if humanism is indeed in danger, what might take its place? This part of the book does not consist of mere philosophising or idle future-telling. Rather, it scrutinises our smartphones, dating practices and job market for clues of things to come.
For humanist true-believers, all this may sound very pessimistic and depressing. But it is best not to jump to conclusions. History has witnessed the rise and fall of many religions, empires and cultures. Such upheavals are not necessarily bad. Humanism has dominated the world for 300 years, which is not such a long time. The pharaohs ruled Egypt for 3,000 years, and the popes dominated Europe for a millennium. If you told an Egyptian in the time of Ramses II that one day the pharaohs would be gone, he would probably have been aghast. ‘How can we live without a pharaoh? Who will ensure order, peace and justice?’ If you told people in the Middle Ages that within a few centuries God would be dead, they would have been horrified. ‘How can we live without God? Who will give life meaning and protect us from chaos?’
Looking back, many think that the downfall of the pharaohs and the death of God were both positive developments. Maybe the collapse of humanism will also be beneficial. People are usually afraid of change because they fear the unknown. But the single greatest constant of history is that everything changes.
10. King Ashurbanipal of Assyria slaying a lion: mastering the animal kingdom.
10. © De Agostini Picture Library/G. Nimatallah/Bridgeman Images.
What is the difference between humans and all other animals?
How did our species conquer the world?
Is Homo sapiens a superior life form, or just the local bully?
With regard to other animals, humans have long since become gods. We don’t like to reflect on this too deeply, because we have not been particularly just or merciful gods. If you watch the National Geographic channel, go to a Disney film or read a book of fairy tales, you might easily get the impression that planet Earth is populated mainly by lions, wolves and tigers who are an equal match for us humans. Simba the lion king holds sway over the forest animals; Little Red Riding Hood tries to evade the Big Bad Wolf; and little Mowgli bravely confronts Shere Khan the tiger. But in reality, they are no longer there. Our televisions, books, fantasies and nightmares are still full of them, but the Simbas, Shere Khans and Big Bad Wolves of our planet are disappearing. The world is populated mainly by humans and their domesticated animals.
How many wolves live today in Germany, the land of the Grimm brothers, Little Red Riding Hood and the Big Bad Wolf? Fewer than a hundred. (And even these are mostly Polish wolves that stole over the border in recent years.) In contrast, Germany is home to 5 million domesticated dogs. Altogether about 200,000 wild wolves still roam the earth, but there are more than 400 million domesticated dogs.1 The world contains 40,000 lions compared to 600 million house cats; 900,000 African buffalo versus 1.5 billion domesticated cows; 50 million penguins and 20 billion chickens.2 Since 1970, despite growing ecological awareness, wildlife populations have halved (not that they were prospering in 1970).3 In 1980 there were 2 billion wild birds in Europe. In 2009 only 1.6 billion were left. In the same year, Europeans raised 1.9 billion chickens for meat and eggs.4 At present, more than 90 per cent of the large animals of the world (i.e., those weighing more than a few pounds) are either humans or domesticated animals.
Scientists divide the history of our planet into epochs such as the Pleistocene, the Pliocene and the Miocene. Officially, we live in the Holocene epoch. Yet it may be better to call the last 70,000 years the Anthropocene epoch: the epoch of humanity. For during these millennia Homo sapiens became the single most important agent of change in the global ecology.5
This is an unprecedented phenomenon. Since the appearance of life, about 4 billion years ago, never has a single species changed the global ecology all by itself. Though there had been no lack of ecological revolutions and mass-extinction events, these were not caused by the actions of a particular lizard, bat or fungus. Rather, they were caused by the workings of mighty natural forces such as climate change, tectonic plate movement, volcanic eruptions and asteroid collisions.
11. Pie chart of global biomass of large animals.
Some people fear that today we are again in mortal danger of massive volcanic eruptions or colliding asteroids. Hollywood producers make billions out of these anxieties. Yet in reality, the danger is slim. Mass extinctions occur only once in many millions of years. Yes, a big asteroid will probably hit our planet sometime in the next 100 million years, but it is very unlikely to happen next Tuesday. Instead of fearing asteroids, we should fear ourselves.
For Homo sapiens has rewritten the rules of the game. This single ape species has managed within 70,000 years to change the global ecosystem in radical and unprecedented ways. Our impact is already on a par with that of ice ages and tectonic movements. Within a century, our impact may surpass that of the asteroid that killed off the dinosaurs 65 million years ago.
That asteroid changed the trajectory of terrestrial evolution, but not its fundamental rules, which have remained fixed since the appearance of the first organisms 4 billion years ago. During all those aeons, whether you were a virus or a dinosaur, you evolved according to the unchanging principles of natural selection. In addition, no matter what strange and bizarre shapes life adopted, it remained confined to the organic realm – whether a cactus or a whale, you were made of organic compounds. Now humankind is poised to replace natural selection with intelligent design, and to extend life from the organic realm into the inorganic.
Even if we leave aside these future prospects and only look back on the last 70,000 years, it is evident that the Anthropocene has altered the world in unprecedented ways. Asteroids, plate tectonics and climate change may have impacted organisms all over the globe, but their influence differed from one area to another. The planet never constituted a single ecosystem; rather, it was a collection of many loosely connected ecosystems. When tectonic movements joined North America with South America it led to the extinction of most South American marsupials, but had no detrimental effect on the Australian kangaroo. When the last ice age reached its peak 20,000 years ago, jellyfish in the Persian Gulf and jellyfish in Tokyo Bay both had to adapt to the new climate. Yet since there was no connection between the two populations, each reacted in a different way, evolving in distinct directions.
In contrast, Sapiens broke the barriers that had separated the globe into independent ecological zones. In the Anthropocene, the planet became for the first time a single ecological unit. Australia, Europe and America continued to have different climates and topographies, yet humans caused organisms from throughout the world to mingle on a regular basis, irrespective of distance and geography. What began as a trickle of wooden boats has turned into a torrent of aeroplanes, oil tankers and giant cargo ships that criss-cross every ocean and bind every island and continent. Consequently the ecology of, say, Australia can no longer be understood without taking into account the European mammals or American microorganisms that flood its shores and deserts. Sheep, wheat, rats and flu viruses that humans brought to Australia during the last 300 years are today far more important to its ecology than the native kangaroos and koalas.
But the Anthropocene isn’t a novel phenomenon of the last few centuries. Already tens of thousands of years ago, when our Stone Age ancestors spread from East Africa to the four corners of the earth, they changed the flora and fauna of every continent and island on which they settled. They drove to extinction all the other human species of the world, 90 per cent of the large animals of Australia, 75 per cent of the large mammals of America and about 50 per cent of all the large land mammals of the planet – and all before they planted the first wheat field, shaped the first metal tool, wrote the first text or struck the first coin.6
Large animals were the main victims because they were relatively few, and they bred slowly. Compare, for example, mammoths (which became extinct) to rabbits (which survived). A troop of mammoths numbered no more than a few dozen individuals, and bred at a rate of perhaps just two youngsters per year. Hence if the local human tribe hunted just three mammoths a year, it would have been enough for deaths to outstrip births, and within a few generations the mammoths would have disappeared. Rabbits, in contrast, bred like rabbits. Even if humans hunted hundreds of rabbits each year, it was not enough to drive them to extinction.
Not that our ancestors planned on wiping out the mammoths; they were simply unaware of the consequences of their actions. The extinction of the mammoths and other large animals may have been swift on an evolutionary timescale, but slow and gradual in human terms. People lived no more than seventy or eighty years, whereas the extinction process took centuries. The ancient Sapiens probably failed to notice any connection between the annual mammoth hunt – in which no more than two or three mammoths were killed – and the disappearance of these furry giants. At most, a nostalgic elder might have told sceptical youngsters that ‘when I was young, mammoths were much more plentiful than these days. And so were mastodons and giant elks. And, of course, the tribal chiefs were honest, and children respected their elders.’
Anthropological and archaeological evidence indicates that archaic hunter-gatherers were probably animists: they believed that there was no essential gap separating humans from other animals. The world – i.e., the local valley and the surrounding mountain chains – belonged to all its inhabitants, and everyone followed a common set of rules. These rules involved ceaseless negotiation between all concerned beings. People talked with animals, trees and stones, as well as with fairies, demons and ghosts. Out of this web of communications emerged the values and norms that were binding on humans, elephants, oak trees and wraiths alike.7
The animist world view still guides some hunter-gatherer communities that have survived into the modern age. One of them is the Nayaka people, who live in the tropical forests of south India. The anthropologist Danny Naveh, who studied the Nayaka for several years, reports that when a Nayaka walking in the jungle encounters a dangerous animal such as a tiger, snake or elephant, he or she might address the animal and say: ‘You live in the forest. I too live here in the forest. You came here to eat, and I too came here to gather roots and tubers. I didn’t come to hurt you.’
A Nayaka was once killed by a male elephant they called ‘the elephant who always walks alone’. The Nayakas refused to help officials from the Indian forestry department capture him. They explained to Naveh that this elephant used to be very close to another male elephant, with whom he always roamed. One day the forestry department captured the second elephant, and since then ‘the elephant who always walks alone’ had become angry and violent. ‘How would you have felt if your spouse had been taken away from you? This is exactly how this elephant felt. These two elephants sometimes separated at night, each walking its own path . . . but in the morning they always came together again. On that day, the elephant saw his buddy falling, lying down. If two are always together and then you shoot one – how would the other feel?’8
Such an animistic attitude strikes many industrialised people as alien. Most of us automatically see animals as essentially different and inferior. This is because even our most ancient traditions were created thousands of years after the end of the hunter-gatherer era. The Old Testament, for example, was written down in the first millennium BC, and its oldest stories reflect the realities of the second millennium BC. But in the Middle East the age of the hunter-gatherers ended more than 7,000 years earlier. It is hardly surprising, therefore, that the Bible rejects animistic beliefs and its only animistic story appears right at the beginning, as a dire warning. The Bible is a long book, bursting with miracles, wonders and marvels. Yet the only time an animal initiates a conversation with a human is when the serpent tempts Eve to eat the forbidden fruit of knowledge (Bil’am’s donkey also speaks a few words, but she is merely conveying to Bil’am a message from God).
In the Garden of Eden, Adam and Eve lived as foragers. The expulsion from Eden bears a striking resemblance to the Agricultural Revolution. Instead of allowing Adam to keep gathering wild fruits, an angry God condemns him ‘to eat bread by the sweat of your brow’. It might be no coincidence, then, that biblical animals spoke with humans only in the pre-agricultural era of Eden. What lessons does the Bible draw from the episode? That you shouldn’t listen to snakes, and it is generally best to avoid talking with animals and plants. It leads to nothing but disaster.
Yet the biblical story has deeper and more ancient layers of meaning. In most Semitic languages, ‘Eve’ means ‘snake’ or even ‘female snake’. The name of our ancestral biblical mother hides an archaic animist myth, according to which snakes are not our enemies, but our ancestors.9 Many animist cultures believe that humans descended from animals, including from snakes and other reptiles. Most Australian Aborigines believed that the Rainbow Serpent created the world. The Aranda and Dieri people maintain that their particular tribes originated from primordial lizards or snakes, which were transformed into humans.10 In fact, modern Westerners too think that they have evolved from reptiles. The brain of each and every one of us is built around a reptilian core, and the structure of our bodies is essentially that of modified reptiles.
12. Paradise lost (the Sistine Chapel). The serpent – who sports a human upper body – initiates the entire chain of events. While the first two chapters of Genesis are dominated by divine monologues (‘and God said . . . and God said . . . and God said . . .’), in the third chapter we finally get a dialogue – between Eve and the serpent (‘and the serpent said unto the woman . . . and the woman said unto the serpent . . .’). This unique conversation between a human and an animal leads to the fall of humanity and our expulsion from Eden.
12. Detail from Michelangelo Buonarroti (1475–1564), the Sistine Chapel, Vatican City © Lessing Images.
The authors of the book of Genesis may have preserved a remnant of archaic animist beliefs in Eve’s name, but they took great care to conceal all other traces. Genesis says that, instead of descending from snakes, humans were divinely created from inanimate matter. The snake is not our progenitor: he seduces us to rebel against our heavenly Father. While animists saw humans as just another kind of animal, the Bible argues that humans are a unique creation, and any attempt to acknowledge the animal within us denies God’s power and authority. Indeed, when modern humans discovered that they actually evolved from reptiles, they rebelled against God and stopped listening to Him – or even believing in His existence.
The Bible, along with its belief in human distinctiveness, was one of the by-products of the Agricultural Revolution, which initiated a new phase in human–animal relations. The advent of farming produced new waves of mass extinctions, but more importantly, it created a completely new life form on earth: domesticated animals. Initially this development was of minor importance, since humans managed to domesticate fewer than twenty species of mammals and birds, compared to the countless thousands of species that remained ‘wild’. Yet with the passing of the centuries, this novel life form became dominant. Today more than 90 per cent of all large animals are domesticated.
Alas, domesticated species paid for their unparalleled collective success with unprecedented individual suffering. Although the animal kingdom has known many types of pain and misery for millions of years, the Agricultural Revolution generated completely new kinds of suffering that only became worse over time.
To the casual observer domesticated animals may seem much better off than their wild cousins and ancestors. Wild boars spend their days searching for food, water and shelter, and are constantly threatened by lions, parasites and floods. Domesticated pigs, in contrast, enjoy food, water and shelter provided by humans, who also treat their diseases and protect them against predators and natural disasters. True, most pigs sooner or later find themselves in the slaughterhouse. Yet does that make their fate any worse than the fate of wild boars? Is it better to be devoured by a lion than slaughtered by a man? Are crocodile teeth less deadly than steel blades?
What makes the fate of domesticated farm animals particularly harsh is not just the way they die, but above all the way they live. Two competing factors have shaped the living conditions of farm animals from ancient times to the present day: human desires and animal needs. Thus humans raise pigs in order to get meat, but if they want a steady supply of meat, they must ensure the long-term survival and reproduction of the pigs. Theoretically this should have protected the animals from extreme forms of cruelty. If a farmer did not take good care of his pigs, they would soon die without offspring and the farmer would starve.
Unfortunately, humans can cause tremendous suffering to farm animals in various ways, even while ensuring their survival and reproduction. The root of the problem is that domesticated animals have inherited from their wild ancestors many physical, emotional and social needs that are redundant on human farms. Farmers routinely ignore these needs, without paying any economic penalty. They lock animals in tiny cages, mutilate their horns and tails, separate mothers from offspring and selectively breed monstrosities. The animals suffer greatly, yet they live on and multiply.
Doesn’t that contradict the most basic principles of natural selection? The theory of evolution maintains that all instincts, drives and emotions have evolved in the sole interest of survival and reproduction. If so, doesn’t the continuous reproduction of farm animals prove that all their real needs are met? How can a pig have a ‘need’ that is not really needed for his survival and reproduction?
It is certainly true that all instincts, drives and emotions evolved in order to meet the evolutionary pressures of survival and reproduction. However, if and when these pressures suddenly disappear, the instincts, drives and emotions they had shaped do not disappear with them. At least not instantly. Even if they are no longer instrumental for survival and reproduction, these instincts, drives and emotions continue to mould the subjective experiences of the animal. For animals and humans alike, agriculture changed selection pressures almost overnight, but it did not change their physical, emotional and social drives. Of course evolution never stands still, and it has continued to modify humans and animals in the 12,000 years since the advent of farming. For example, humans in Europe and western Asia evolved the ability to digest cows’ milk, while cows lost their fear of humans, and today produce far more milk than their wild ancestors. Yet these are superficial alterations. The deep sensory and emotional structures of cows, pigs and humans alike haven’t changed much since the Stone Age.
Why do modern humans love sweets so much? Not because in the early twenty-first century we must gorge on ice cream and chocolate in order to survive. Rather, it is because when our Stone Age ancestors came across sweet fruit or honey, the most sensible thing to do was to eat as much of it as quickly as possible. Why do young men drive recklessly, get involved in violent arguments and hack confidential Internet sites? Because they are following ancient genetic decrees that might be useless and even counterproductive today, but that made good evolutionary sense 70,000 years ago. A young hunter who risked his life chasing a mammoth outshone all his competitors and won the hand of the local beauty, and we are now stuck with his macho genes.11
Exactly the same evolutionary logic shapes the lives of pigs, sows and piglets in human-controlled farms. In order to survive and reproduce in the wild, ancient boars needed to roam vast territories, familiarise themselves with their environment and beware of traps and predators. They further needed to communicate and cooperate with their fellow boars, forming complex groups dominated by old and experienced matriarchs. Evolutionary pressures consequently made wild boars – and even more so wild sows – highly intelligent social animals, characterised by a lively curiosity and strong urges to socialise, play, wander about and explore their surroundings. A sow born with some rare mutation that made her indifferent to her environment and to other boars was unlikely to survive or reproduce.
The descendants of wild boars – domesticated pigs – inherited their intelligence, curiosity and social skills.12 Like wild boars, domesticated pigs communicate using a rich variety of vocal and olfactory signals: mother sows recognise the unique squeaks of their piglets, whereas two-day-old piglets already differentiate their mother’s calls from those of other sows.13 Professor Stanley Curtis of the Pennsylvania State University trained two pigs – named Hamlet and Omelette – to control a special joystick with their snouts, and found that the pigs soon rivalled primates in learning and playing simple computer games.14
Today most sows in industrial farms don’t play computer games. They are locked by their human masters in tiny gestation crates, usually measuring six and a half by two feet. The crates have a concrete floor and metal bars, and hardly allow the pregnant sows even to turn around or sleep on their side, never mind walk. After three and a half months in such conditions, the sows are moved to slightly wider crates, where they give birth and nurse their piglets. Whereas piglets would naturally suckle for ten to twenty weeks, in industrial farms they are forcibly weaned within two to four weeks, separated from their mother and shipped to be fattened and slaughtered. The mother is immediately impregnated again, and sent back to the gestation crate to start another cycle. The typical sow would go through five to ten such cycles before being slaughtered herself. In recent years the use of crates has been restricted in the European Union and some US states, but the crates are still commonly used in many other countries, and tens of millions of breeding sows pass almost their entire lives in them.
The human farmers take care of everything the sow needs in order to survive and reproduce. She is given enough food, vaccinated against diseases, protected against the elements and artificially inseminated. From an objective perspective, the sow no longer needs to explore her surroundings, socialise with other pigs, bond with her piglets or even walk. But from a subjective perspective, the sow still feels very strong urges to do all of these things, and if these urges are not fulfilled she suffers greatly. Sows locked in gestation crates typically display acute frustration alternating with extreme despair.15
This is the basic lesson of evolutionary psychology: a need shaped thousands of generations ago continues to be felt subjectively even if it is no longer necessary for survival and reproduction in the present. Tragically, the Agricultural Revolution gave humans the power to ensure the survival and reproduction of domesticated animals while ignoring their subjective needs.
13. Sows confined in gestation crates. These highly social and intelligent beings spend most of their lives in this condition, as if they were already sausages.
13. © Balint Porneczi/Bloomberg via Getty Images.
How can we be sure that animals such as pigs actually have a subjective world of needs, sensations and emotions? Aren’t we guilty of humanising animals, i.e., ascribing human qualities to non-human entities, like children believing that dolls feel love and anger?
In fact, attributing emotions to pigs doesn’t humanise them. It ‘mammalises’ them. For emotions are not a uniquely human quality – they are common to all mammals (as well as to all birds and probably to some reptiles and even fish). All mammals evolved emotional abilities and needs, and from the fact that pigs are mammals we can safely deduce that they have emotions.16
In recent decades life scientists have demonstrated that emotions are not some mysterious spiritual phenomenon that is useful just for writing poetry and composing symphonies. Rather, emotions are biochemical algorithms that are vital for the survival and reproduction of all mammals. What does this mean? Well, let’s begin by explaining what an algorithm is. This is of great importance not only because this key concept will reappear in many of the following chapters, but also because the twenty-first century will be dominated by algorithms. ‘Algorithm’ is arguably the single most important concept in our world. If we want to understand our life and our future, we should make every effort to understand what an algorithm is, and how algorithms are connected with emotions.
An algorithm is a methodical set of steps that can be used to make calculations, resolve problems and reach decisions. An algorithm isn’t a particular calculation, but the method followed when making the calculation. For example, if you want to calculate the average between two numbers, you can use a simple algorithm. The algorithm says: ‘First step: add the two numbers together. Second step: divide the sum by two.’ When you enter the numbers 4 and 8, you get 6. When you enter 117 and 231, you get 174.
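Written out as a short computer program, the same two-step method might look like the following sketch. Python is used here purely for illustration, and the function name is my own label rather than anything standard:

def average_of_two(a, b):
    # First step: add the two numbers together.
    total = a + b
    # Second step: divide the sum by two.
    return total / 2

print(average_of_two(4, 8))      # prints 6.0
print(average_of_two(117, 231))  # prints 174.0

The function itself is the algorithm; each call to it is merely one particular calculation.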
A more complex example is a cooking recipe. An algorithm for preparing vegetable soup may tell us:
1. Heat half a cup of oil in a pot.
2. Finely chop four onions.
3. Fry the onions until golden.
4. Cut three potatoes into chunks and add to the pot.
5. Slice a cabbage into strips and add to the pot.
And so forth. You can follow the same algorithm dozens of times, each time using slightly different vegetables, and therefore getting a slightly different soup. But the algorithm remains the same.
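The same point can be put in code: the recipe is one fixed procedure, and the vegetables are merely its inputs. The sketch below is only an illustration under my own simplifying assumptions (the function name and the way ingredients are handled are invented for the example):

def make_soup(vegetables):
    # The steps never change; only the inputs do.
    pot = ['half a cup of oil, heated']
    for vegetable in vegetables:
        pot.append('chopped ' + vegetable)
    return pot

print(make_soup(['onion', 'potato', 'cabbage']))
print(make_soup(['onion', 'carrot', 'leek']))  # slightly different soup, same algorithm

Of course, like the written recipe, this code only describes the steps; something still has to carry them out.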
A recipe by itself cannot make soup. You need a person to read the recipe and follow the prescribed set of steps. But you can build a machine that embodies this algorithm and follows it automatically. Then you just need to provide the machine with water, electricity and vegetables – and it will prepare the soup by itself. There aren’t many soup machines around, but you are probably familiar with beverage vending machines. Such machines usually have a slot for coins, an opening for cups, and rows of buttons. The first row has buttons for coffee, tea and cocoa. The second row is marked: no sugar, one spoon of sugar, two spoons of sugar. The third row indicates milk, soya milk, no milk. A man approaches the machine, inserts a coin into the slot and presses the buttons marked ‘tea’, ‘one sugar’ and ‘milk’. The machine kicks into action, following a precise set of steps. It drops a tea bag into a cup, pours boiling water, adds a spoonful of sugar and milk, and ding! A nice cup of tea emerges. This is an algorithm.17
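The machine’s precise set of steps could be sketched as follows. This is again a toy illustration, not a description of any real vending machine; the drink and option names simply mirror the buttons described above:

def prepare_drink(drink, spoons_of_sugar, milk_choice):
    # Follow the same fixed steps every time, varying only with the buttons pressed.
    cup = []
    if drink == 'coffee':
        cup.append('coffee')
    elif drink == 'tea':
        cup.append('tea bag')
    else:
        cup.append('cocoa')
    cup.append('boiling water')
    cup.extend(['spoon of sugar'] * spoons_of_sugar)
    if milk_choice != 'no milk':
        cup.append(milk_choice)
    return cup

print(prepare_drink('tea', 1, 'milk'))  # the choice described above: tea, one sugar, milk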
Over the last few decades biologists have reached the firm conclusion that the man pressing the buttons and drinking the tea is also an algorithm. A much more complicated algorithm than the vending machine, no doubt, but still an algorithm. Humans are algorithms that produce not cups of tea, but copies of themselves (like a vending machine which, if you press the right combination of buttons, produces another vending machine).
The algorithms controlling vending machines work through mechanical gears and electric circuits. The algorithms controlling humans work through sensations, emotions and thoughts. And exactly the same kind of algorithms control pigs, baboons, otters and chickens. Consider, for example, the following survival problem: a baboon spots some bananas hanging on a tree, but also notices a lion lurking nearby. Should the baboon risk his life for those bananas?
This boils down to a mathematical problem of calculating probabilities: the probability that the baboon will die of hunger if he does not eat the bananas, versus the probability that the lion will catch the baboon. In order to solve this problem the baboon needs to take into account a lot of data. How far am I from the bananas? How far away is the lion? How fast can I run? How fast can the lion run? Is the lion awake or asleep? Does the lion seem to be hungry or satiated? How many bananas are there? Are they big or small? Green or ripe? In addition to these external data, the baboon must also consider information about conditions within his own body. If he is starving, it makes sense to risk everything for those bananas, no matter the odds. In contrast, if he has just eaten, and the bananas are mere greed, why take any risks at all?
In order to weigh and balance all these variables and probabilities, the baboon requires far more complicated algorithms than the ones controlling automatic vending machines. The prize for making correct calculations is correspondingly greater. The prize is the very survival of the baboon. A timid baboon – one whose algorithms overestimate dangers – will starve to death, and the genes that shaped these cowardly algorithms will perish with him. A rash baboon – one whose algorithms underestimate dangers – will fall prey to the lion, and his reckless genes will also fail to make it to the next generation. These algorithms undergo constant quality control by natural selection. Only animals that calculate probabilities correctly leave offspring behind.
Yet this is all very abstract. How exactly does a baboon calculate probabilities? He certainly doesn’t draw a pencil from behind his ear, a notebook from a back pocket, and start computing running speeds and energy levels with a calculator. Rather, the baboon’s entire body is the calculator. What we call sensations and emotions are in fact algorithms. The baboon feels hunger, he feels fear and trembling at the sight of the lion, and he feels his mouth watering at the sight of the bananas. Within a split second, he experiences a storm of sensations, emotions and desires, which is nothing but the process of calculation. The result will appear as a feeling: the baboon will suddenly feel his spirit rising, his hairs standing on end, his muscles tensing, his chest expanding, and he will inhale a big breath, and ‘Forward! I can do it! To the bananas!’ Alternatively, he may be overcome by fear, his shoulders will droop, his stomach will turn, his legs will give way, and ‘Mama! A lion! Help!’ Sometimes the probabilities match so evenly that it is hard to decide. This too will manifest itself as a feeling. The baboon will feel confused and indecisive. ‘Yes . . . No . . . Yes . . . No . . . Damn! I don’t know what to do!’
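One crude way to picture what the baboon’s body is doing is as a comparison of expected outcomes. The following is only a toy model built on my own assumptions – the numbers, variable names and the simple expected-value rule are invented – since the point of the passage is precisely that the real calculation is carried out by sensations and emotions rather than by explicit arithmetic:

def go_for_bananas(chance_of_being_caught, hunger):
    # hunger runs from 0.0 (just ate) to 1.0 (starving);
    # being caught by the lion is treated as a fixed, catastrophic cost.
    value_of_bananas = hunger
    cost_of_capture = 1.0
    expected_gain = (1 - chance_of_being_caught) * value_of_bananas
    expected_loss = chance_of_being_caught * cost_of_capture
    return expected_gain > expected_loss

print(go_for_bananas(chance_of_being_caught=0.1, hunger=0.9))  # True: starving, lion far away
print(go_for_bananas(chance_of_being_caught=0.4, hunger=0.2))  # False: just ate, lion too close

Natural selection, as described above, is what tunes such weightings over the generations.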
In order to transmit genes to the next generation, it is not enough to solve survival problems. Animals need to solve reproduction problems too, and this also depends on calculating probabilities. Natural selection evolved passion and disgust as quick algorithms for evaluating reproduction odds. Beauty means ‘good chances for having successful offspring’. When a woman sees a man and thinks, ‘Wow! He is gorgeous!’ and when a peahen sees a peacock and thinks, ‘Jesus! What a tail!’ they are doing something similar to the automatic vending machine. As light reflected from the male’s body hits their retinas, extremely powerful algorithms honed by millions of years of evolution kick in. Within a few milliseconds the algorithms convert tiny cues in the male’s external appearance into reproduction probabilities, and reach the conclusion: ‘In all likelihood, this is a very healthy and fertile male, with excellent genes. If I mate with him, my offspring are also likely to enjoy good health and excellent genes.’ Of course, this conclusion is not spelled out in words or numbers, but in the fiery itch of sexual attraction. Peahens, and most women, don’t make such calculations with pen and paper. They just feel them.
Even Nobel laureates in economics make only a tiny fraction of their decisions using pen, paper and calculator; 99 per cent of our decisions – including the most important life choices concerning spouses, careers and habitats – are made by the highly refined algorithms we call sensations, emotions and desires.18
Because these algorithms control the lives of all mammals and birds (and probably some reptiles and even fish), when humans, baboons and pigs feel fear, similar neurological processes take place in similar brain areas. It is therefore likely that frightened humans, frightened baboons and frightened pigs have similar experiences.19
There are differences too, of course. Pigs don’t seem to experience the extremes of compassion and cruelty that characterise Homo sapiens, nor the sense of wonder that overwhelms a human gazing up at the infinitude of a starry sky. It is likely that there are also opposite examples, of swinish emotions unfamiliar to humans, but I cannot name any, for obvious reasons. However, one core emotion is apparently shared by all mammals: the mother–infant bond. Indeed, it gives mammals their name. The word ‘mammal’ comes from the Latin mamma, meaning breast. Mammal mothers love their offspring so much that they allow them to suckle from their body. Mammal youngsters, on their side, feel an overwhelming desire to bond with their mothers and stay near them. In the wild, piglets, calves and puppies that fail to bond with their mothers rarely survive for long. Until recently that was true of human children too. Conversely, a sow, cow or bitch that due to some rare mutation does not care about her young may live a long and comfortable life, but her genes will not pass to the next generation. The same logic is true among giraffes, bats, whales and porcupines. We can argue about other emotions, but since mammal youngsters cannot survive without motherly care, it is evident that motherly love and a strong mother–infant bond characterise all mammals.20
14. A peacock and a man. When you look at these images, data on proportions, colours and sizes gets processed by your biochemical algorithms, causing you to feel attraction, repulsion or indifference.
14. Left: © Bergserg/Shutterstock.com. Right: © s_bukley/Shutterstock.com.
It took scientists many years to acknowledge this. Not long ago psychologists doubted the importance of the emotional bond between parents and children even among humans. In the first half of the twentieth century, and despite the influence of Freudian theories, the dominant behaviourist school argued that relations between parents and children were shaped by material feedback; that children needed mainly food, shelter and medical care; and that children bonded with their parents simply because the latter provide these material needs. Children who demanded warmth, hugs and kisses were thought to be ‘spoiled’. Childcare experts warned that children who were hugged and kissed by their parents would grow up to be needy, egotistical and insecure adults.21
John Watson, a leading childcare authority in the 1920s, sternly advised parents, ‘Never hug and kiss [your children], never let them sit in your lap. If you must, kiss them once on the forehead when they say goodnight. Shake hands with them in the morning.’22 The popular magazine Infant Care explained that the secret of raising children is to maintain discipline and to provide the children’s material needs according to a strict daily schedule. A 1929 article instructed parents that if an infant cries out for food before the normal feeding time, ‘Do not hold him, nor rock him to stop his crying, and do not nurse him until the exact hour for the feeding comes. It will not hurt the baby, even the tiny baby, to cry.’23
Only in the 1950s and 1960s did a growing consensus of experts abandon these strict behaviourist theories and acknowledge the central importance of emotional needs. In a series of famous (and shockingly cruel) experiments, the psychologist Harry Harlow separated infant monkeys from their mothers shortly after birth, and isolated them in small cages. When given a choice between a metal dummy-mother fitted with a milk bottle, and a soft cloth-covered dummy with no milk, the baby monkeys clung to the barren cloth mother for all they were worth.
Those baby monkeys knew something that John Watson and the experts of Infant Care failed to realise: mammals can’t live on food alone. They need emotional bonds too. Millions of years of evolution preprogrammed the monkeys with an overwhelming desire for emotional bonding. Evolution also imprinted them with the assumption that emotional bonds are more likely to be formed with soft furry things than with hard and metallic objects. (This is also why small human children are far more likely to become attached to dolls, blankets and smelly rags than to cutlery, stones or wooden blocks.) The need for emotional bonds is so strong that Harlow’s baby monkeys abandoned the nourishing metal dummy and turned their attention to the only object that seemed capable of answering that need. Alas, the cloth-mother never responded to their affection and the little monkeys consequently suffered from severe psychological and social problems, and grew up to be neurotic and asocial adults.
Today we look back with incomprehension at early twentieth-century child-rearing advice. How could experts fail to appreciate that children have emotional needs, and that their mental and physical health depends as much on providing for these needs as on food, shelter and medicines? Yet when it comes to other mammals we keep denying the obvious. Like John Watson and the Infant Care experts, farmers throughout history took care of the material needs of piglets, calves and kids, but tended to ignore their emotional needs. Thus both the meat and dairy industries are based on breaking the most fundamental emotional bond in the mammal kingdom. Farmers get their breeding sows and dairy cows impregnated again and again. Yet the piglets and calves are separated from their mother shortly after birth, and often pass their days without ever sucking at her teats or feeling the warm touch of her tongue and body. What Harry Harlow did to a few hundred monkeys, the meat and dairy industries are doing to billions of animals every year.24
How did farmers justify their behaviour? Whereas hunter-gatherers were seldom aware of the damage they inflicted on the ecosystem, farmers knew perfectly well what they were doing. They knew they were exploiting domesticated animals and subjugating them to human desires and whims. They justified their actions in the name of new theist religions, which mushroomed and spread in the wake of the Agricultural Revolution. Theist religions began to argue that the universe is not a parliament of beings, but rather a theocracy ruled by a group of great gods – or perhaps by a single capital ‘G’ God (‘Theos’ in Greek). We don’t normally associate this idea with agriculture, but at least in their beginnings theist religions were an agricultural enterprise. The theology, mythology and liturgy of religions such as Judaism, Hinduism and Christianity revolved at first around the relationship between humans, domesticated plants and farm animals.25
Biblical Judaism, for instance, catered to peasants and shepherds. Most of its commandments dealt with farming and village life, and its major holidays were harvest festivals. People today imagine the ancient temple in Jerusalem as a kind of big synagogue where priests clad in snow-white robes welcomed devout pilgrims, melodious choirs sang psalms and incense perfumed the air. In reality, it looked more like a cross between a slaughterhouse and a barbecue joint. The pilgrims did not come empty-handed. They brought with them a never-ending stream of sheep, goats, chickens and other animals, which were sacrificed at the god’s altar and then cooked and eaten. The psalm-singing choirs could hardly be heard over the bellowing and bleating of calves and kids. Priests in bloodstained outfits cut the victims’ throats, collected the gushing blood in jars and spilled it over the altar. The perfume of incense mixed with the odours of congealed blood and roasted meat, while swarms of black flies buzzed just about everywhere (see, for example, Numbers 28, Deuteronomy 12, and 1 Samuel 2). A modern Jewish family that celebrates a holiday by having a barbecue on their front lawn is much closer to the spirit of biblical times than an orthodox family that spends the time studying scriptures in a synagogue.
Theist religions, such as biblical Judaism, justified the agricultural economy through new cosmological myths. Animist religions had previously depicted the universe as a grand Chinese opera with a limitless cast of colourful actors. Elephants and oak trees, crocodiles and rivers, mountains and frogs, ghosts and fairies, angels and demons – each had a role in the cosmic opera. Theist religions rewrote the script, turning the universe into a bleak Ibsen drama with just two main characters: man and God. The angels and demons somehow survived the transition, becoming the messengers and servants of the great gods. Yet the rest of the animist cast – all the animals, plants and other natural phenomena – were transformed into silent decor. True, some animals were considered sacred to this or that god, and many gods had animal features: the Egyptian god Anubis sported the head of a jackal, and even Jesus Christ was frequently depicted as a lamb. Yet ancient Egyptians could easily tell the difference between Anubis and an ordinary jackal sneaking into the village to hunt chickens, and no Christian butcher ever mistook the lamb under his knife for Jesus.
We normally think that theist religions sanctified the great gods. We tend to forget that they sanctified humans, too. Hitherto Homo sapiens had been just one actor in a cast of thousands. In the new theist drama Sapiens became the central hero around whom the entire universe revolved.
The gods, meanwhile, were given two related roles to play. Firstly, they explained what is so special about Sapiens and why humans should dominate and exploit all other organisms. Christianity, for example, maintained that humans hold sway over the rest of creation because the Creator charged them with that authority. Moreover, according to Christianity, God gave an eternal soul only to humans. Since the fate of this eternal soul is the point of the whole Christian cosmos, and since animals have no soul, they are mere extras. Humans thus became the apex of creation, while all other organisms were pushed to the sidelines.
Secondly, the gods had to mediate between humans and the ecosystem. In the animistic cosmos, everyone talked with everyone directly. If you needed something from the caribou, the fig trees, the clouds or the rocks, you addressed them yourself. In the theist cosmos, all non-human entities were silenced. Consequently you could no longer talk with trees and animals. What to do, then, when you wanted the trees to give more fruits, the cows to give more milk, the clouds to bring more rain and the locusts to stay away from your crops? That’s where the gods entered the picture. They promised to supply rain, fertility and protection, provided humans did something in return. This was the essence of the agricultural deal. The gods safeguarded and multiplied farm production, and in exchange humans had to share the produce with the gods. This deal served both parties, at the expense of the rest of the ecosystem.
Today in Nepal, devotees of the goddess Gadhimai celebrate her festival every five years in the village of Bariyapur. A record was set in 2009 when 250,000 animals were sacrificed to the goddess. A local driver explained to a visiting British journalist that ‘If we want anything, and we come here with an offering to the goddess, within five years all our dreams will be fulfilled.’26
Much of theist mythology explains the subtle details of this deal. The Mesopotamian Gilgamesh epic recounts that when the gods sent a great deluge to destroy the world, almost all humans and animals perished. Only then did the rash gods realise that nobody remained to make any offerings to them. They became crazed with hunger and distress. Luckily, one human family survived, thanks to the foresight of the god Enki, who instructed his devotee Utnapishtim to take shelter in a large wooden ark along with his relatives and a menagerie of animals. When the deluge subsided and this Mesopotamian Noah emerged from his ark, the first thing he did was sacrifice some animals to the gods. Then, the epic tells us, all the great gods rushed to the spot: ‘The gods smelled the savour / the gods smelled the sweet savour / the gods swarmed like flies around the offering.’27 The biblical story of the deluge (written more than 1,000 years after the Mesopotamian version) also reports that immediately upon leaving the ark, ‘Noah built an altar to the Lord and, taking some of the clean animals and clean birds, he sacrificed burnt offerings on it. The Lord smelled the pleasing aroma and said in his heart: Never again will I curse the ground because of humans’ (Genesis 8:20–1).
This deluge story became a founding myth of the agricultural world. It is possible of course to give it a modern environmentalist spin. The deluge could teach us that our actions can ruin the entire ecosystem, and humans are divinely charged with protecting the rest of creation. Yet traditional interpretations saw the deluge as proof of human supremacy and animal worthlessness. According to these interpretations, Noah was instructed to save the whole ecosystem in order to protect the common interests of gods and humans rather than the interests of the animals. Non-human organisms have no intrinsic value; they exist solely for our sake.
After all, when ‘the Lord saw how great the wickedness of the human race had become’ He resolved to ‘wipe from the face of the earth the human race I have created – and with them the animals, the birds and the creatures that move along the ground – for I regret that I have made them’ (Genesis 6:7). The Bible thinks it is perfectly all right to destroy all animals as punishment for the crimes of Homo sapiens, as if the existence of giraffes, pelicans and ladybirds has lost all purpose if humans misbehave. The Bible could not imagine a scenario in which God repents having created Homo sapiens, wipes this sinful ape off the face of the earth, and then spends eternity enjoying the antics of ostriches, kangaroos and panda bears.
Theist religions nevertheless have certain animal-friendly beliefs. The gods gave humans authority over the animal kingdom, but this authority carried with it some responsibilities. For example, Jews were commanded to allow farm animals to rest on the Sabbath, and to avoid causing them unnecessary suffering. (Though whenever interests clashed, human interests always trumped animal interests.28)
A Talmudic tale recounts how on the way to the slaughterhouse, a calf escaped and sought refuge with Rabbi Yehuda HaNasi, one of the founders of rabbinical Judaism. The calf tucked his head under the rabbi’s flowing robes and started crying. Yet the rabbi pushed the calf away, saying, ‘Go. You were created for that very purpose.’ Since the rabbi showed no mercy, God punished him, and he suffered from a painful illness for thirteen years. Then, one day, a servant cleaning the rabbi’s house found some newborn rats and began sweeping them out. Rabbi Yehuda rushed to save the helpless creatures, instructing the servant to leave them in peace, because ‘God is good to all, and has compassion on all he has made’ (Psalms 145:9). Since the rabbi showed compassion to these rats, God showed compassion to the rabbi, and he was cured of his illness.29
Other religions, particularly Jainism, Buddhism and Hinduism, have demonstrated even greater empathy to animals. They emphasise the connection between humans and the rest of the ecosystem, and their foremost ethical commandment has been to avoid killing any living being. Whereas the biblical ‘Thou shalt not kill’ covered only humans, the ancient Indian principle of ahimsa (non-violence) extends to every sentient being. Jain monks are particularly careful in this regard. They always cover their mouths with a white cloth, lest they inhale an insect, and whenever they walk they carry a broom to gently sweep any ant or beetle from their path.30
Nevertheless, all agricultural religions – Jainism, Buddhism and Hinduism included – found ways to justify human superiority and the exploitation of animals (if not for meat, then for milk and muscle power). They have all claimed that a natural hierarchy of beings entitles humans to control and use other animals, provided that the humans observe certain restrictions. Hinduism, for example, has sanctified cows and forbidden eating beef, but has also provided the ultimate justification for the dairy industry, alleging that cows are generous creatures that positively yearn to share their milk with humankind.
Humans thus committed themselves to an ‘agricultural deal’. According to this deal, cosmic forces gave humans command over other animals, on condition that humans fulfilled certain obligations towards the gods, towards nature and towards the animals themselves. It was easy to believe in the existence of such a cosmic compact, because it reflected the daily routine of farming life.
Hunter-gatherers had not seen themselves as superior beings because they were seldom aware of their impact on the ecosystem. A typical band numbered in the dozens, it was surrounded by thousands of wild animals, and its survival depended on understanding and respecting the desires of these animals. Foragers had to constantly ask themselves what deer dream about, and what lions think. Otherwise, they could not hunt the deer, nor escape the lions.
Farmers, in contrast, lived in a world controlled and shaped by human dreams and thoughts. Humans were still subject to formidable natural forces such as storms and earthquakes, but they were far less dependent on the wishes of other animals. A farm boy learned early on to ride a horse, harness a bull, whip a stubborn donkey and lead the sheep to pasture. It was easy and tempting to believe that such everyday activities reflected either the natural order of things or the will of heaven.
The Agricultural Revolution was thus both an economic and a religious revolution. New kinds of economic relations emerged together with new kinds of religious beliefs that justified the brutal exploitation of animals. This ancient process can be witnessed even today whenever the last remaining hunter-gatherer communities adopt farming. In recent years the Nayaka hunter-gatherers of south India have taken up some agricultural practices such as herding cattle, raising chickens and cultivating tea. Not surprisingly, they have also picked up new attitudes towards animals, and they espouse very different views about domesticated animals (and plants) compared with wild organisms.
In the Nayaka language a living being possessing a unique personality is called mansan. When probed by the anthropologist Danny Naveh, the Nayaka explained that all elephants are mansan. ‘We live in the forest, they live in the forest. We are all mansan . . . So are bears, deer and tigers. All forest animals.’ What about cows? ‘Cows are different. You have to lead them everywhere.’ And chickens? ‘They are nothing. They are not mansan.’ And forest trees? ‘Yes – they live for such a long time.’ And tea bushes? ‘Oh, these I cultivate so that I can sell the tea leaves and buy what I need from the store. No, they aren’t mansan.’31
The degradation of animals from sentient beings deserving of respect into mere property rarely stopped with cows and chickens. Most agricultural societies began treating various classes of people as if they too were property. In ancient Egypt, biblical Israel and medieval China it was common to enslave humans, torture them and execute them for even trifling offences. Just as peasants did not consult with cows and chickens about the running of the farm, so rulers did not dream of asking peasants for their opinions about running the kingdom. And when ethnic groups or religious communities clashed, they frequently dehumanised each other. Depicting the ‘others’ as subhuman beasts was a first step towards treating them as such. The farm thus became the prototype of new societies, complete with puffed-up masters, inferior races fit for exploitation, wild beasts ripe for extermination and a great God above who gives His blessing to the entire arrangement.
The rise of modern science and industry brought about the next revolution in human–animal relations. During the Agricultural Revolution humankind silenced animals and plants, and turned the animist grand opera into a dialogue between man and gods. During the Scientific Revolution humankind silenced the gods too. The world was now a one-man show. Humankind stood alone on an empty stage, talking to itself, negotiating with no one and acquiring enormous powers without any obligations. Having deciphered the mute laws of physics, chemistry and biology, humankind now does with them as it pleases.
When an archaic hunter went out to the savannah, he asked the help of the wild bull, and the bull demanded something of the hunter. When an ancient farmer wanted his cows to produce lots of milk, he asked some great heavenly god for help, and the god stipulated his conditions. When the white-coated staff in Nestlé’s Research and Development department want to increase dairy production, they study genetics – and the genes don’t ask for anything in return.
But just as the hunters and farmers had their myths, so do the people in the R&D department. Their most famous myth shamelessly plagiarises the legend of the Tree of Knowledge and the Garden of Eden, but transports the action to the garden at Woolsthorpe Manor in Lincolnshire. According to this myth, Isaac Newton was sitting there under an apple tree when a ripe apple dropped on his head. Newton began wondering why the apple fell straight downwards, rather than sideways or upwards. His enquiry led him to discover gravity and the laws of Newtonian mechanics.
Newton’s story turns the Tree of Knowledge myth on its head. In the Garden of Eden the serpent initiates the drama, tempting humans to sin, thereby bringing the wrath of God down upon them. Adam and Eve are a plaything for serpent and God alike. In contrast, in the Garden of Woolsthorpe man is the sole agent. Though Newton himself was a deeply religious Christian who devoted far more time to studying the Bible than the laws of physics, the Scientific Revolution that he helped launch pushed God to the sidelines. When Newton’s successors came to write their Genesis myth, they had no use for either God or serpent. The Garden of Woolsthorpe is run by blind laws of nature, and the initiative to decipher these laws is strictly human. The story may begin with an apple falling on Newton’s head, but the apple did not do it on purpose.
In the Garden of Eden myth, humans are punished for their curiosity and for their wish to gain knowledge. God expels them from Paradise. In the Garden of Woolsthorpe myth, nobody punishes Newton – just the opposite. Thanks to his curiosity humankind gains a better understanding of the universe, becomes more powerful and takes another step towards the technological paradise. Untold numbers of teachers throughout the world recount the Newton myth to encourage curiosity, implying that if only we gain enough knowledge, we can create paradise here on earth.
In fact, God is present even in the Newton myth: Newton himself is God. When biotechnology, nanotechnology and the other fruits of science ripen, Homo sapiens will attain divine powers and come full circle back to the biblical Tree of Knowledge. Archaic hunter-gatherers were just another species of animal. Farmers saw themselves as the apex of creation. Scientists will upgrade us into gods.
Whereas the Agricultural Revolution gave rise to theist religions, the Scientific Revolution gave birth to humanist religions, in which humans replaced gods. While theists worship theos (Greek for ‘god’), humanists worship humans. The founding idea of humanist religions such as liberalism, communism and Nazism is that Homo sapiens has some unique and sacred essence that is the source of all meaning and authority in the universe. Everything that happens in the cosmos is judged to be good or bad according to its impact on Homo sapiens.
Whereas theism justified traditional agriculture in the name of God, humanism has justified modern industrial farming in the name of Man. Industrial farming sanctifies human needs, whims and wishes, while disregarding everything else. Industrial farming has no real interest in animals, which don’t share the sanctity of human nature. And it has no use for gods, because modern science and technology give humans powers that far exceed those of the ancient gods. Science enables modern firms to subjugate cows, pigs and chickens to more extreme conditions than those prevailing in traditional agricultural societies.
In ancient Egypt, in the Roman Empire or in medieval China, humans had only a rudimentary understanding of biochemistry, genetics, zoology and epidemiology. Consequently, their powers of manipulation were limited. In those days, pigs, cows and chickens ran free among the houses, and searched for edible treasures in the rubbish heap and in the nearby woods. If an ambitious peasant had tried to confine thousands of animals in a crowded coop, a deadly epidemic would probably have resulted, wiping out all the animals as well as many of the villagers. No priest, shaman or god could have prevented it.
But once modern science deciphered the secrets of epidemics, pathogens and antibiotics, industrial coops, pens and pigsties became feasible. With the help of vaccinations, medications, hormones, pesticides, central air-conditioning systems and automatic feeders, it is now possible to pack tens of thousands of pigs, cows or chickens into neat rows of cramped cages, and produce meat, milk and eggs with unprecedented efficiency.
In recent years, as people began to rethink human–animal relations, such practices have come under increasing criticism. We are suddenly showing unprecedented interest in the fate of so-called lower life forms, perhaps because we are about to become one. If and when computer programs attain superhuman intelligence and unprecedented power, should we begin valuing these programs more than we value humans? Would it be okay, for example, for an artificial intelligence to exploit humans and even kill them to further its own needs and desires? If it should never be allowed to do that, despite its superior intelligence and power, why is it ethical for humans to exploit and kill pigs? Do humans have some magical spark, in addition to higher intelligence and greater power, which distinguishes them from pigs, chickens, chimpanzees and computer programs alike? If yes, where did that spark come from, and why are we certain that an AI could never acquire it? If there is no such spark, would there be any reason to continue assigning special value to human life even after computers surpass humans in intelligence and power? Indeed, what exactly is it about humans that makes us so intelligent and powerful in the first place, and how likely is it that non-human entities will ever rival and surpass us?
The next chapter will examine the nature and power of Homo sapiens, not only in order to comprehend further our relations with other animals, but also to appreciate what the future might hold for us, and what relations between humans and superhumans might look like.
There is no doubt that Homo sapiens is the most powerful species in the world. Homo sapiens also likes to think that it enjoys a superior moral status, and that human life has much greater value than the lives of pigs, elephants or wolves. This is less obvious. Does might make right? Is human life more precious than porcine life simply because the human collective is more powerful than the pig collective? The United States is far mightier than Afghanistan; does this imply that American lives have greater intrinsic value than Afghan lives?
In practice, American lives are more valued. Far more money is invested in the education, health and safety of the average American than of the average Afghan. Killing an American citizen creates a far greater international outcry than killing an Afghan citizen. Yet it is generally accepted that this is no more than an unjust result of the geopolitical balance of power. Afghanistan may have far less clout than the USA, yet the life of a child in the mountains of Tora Bora is considered every bit as sacred as the life of a child in Beverly Hills.
In contrast, when we privilege human children over piglets, we want to believe that this reflects something deeper than the ecological balance of power. We want to believe that human lives really are superior in some fundamental way. We Sapiens love telling ourselves that we enjoy some magical quality that not only accounts for our immense power, but also gives moral justification for our privileged status. What is this unique human spark?
The traditional monotheist answer is that only Sapiens have eternal souls. Whereas the body decays and rots, the soul journeys on towards salvation or damnation, and will experience either everlasting joy in paradise or an eternity of misery in hell. Since pigs and other animals have no soul, they don’t take part in this cosmic drama. They live only for a few years, and then die and fade into nothingness. We should therefore care far more about eternal human souls than about ephemeral pigs.
This is no kindergarten fairy tale, but an extremely powerful myth that continues to shape the lives of billions of humans and animals in the early twenty-first century. The belief that humans have eternal souls whereas animals are just evanescent bodies is a central pillar of our legal, political and economic system. It explains why, for example, it is perfectly okay for humans to kill animals for food, or even just for the fun of it.
However, our latest scientific discoveries flatly contradict this monotheist myth. True, laboratory experiments confirm the accuracy of one part of the myth: just as monotheist religions say, animals have no souls. All the careful studies and painstaking examinations have failed to discover any trace of a soul in pigs, rats or rhesus monkeys. Alas, the same laboratory experiments undermine the second and far more important part of the monotheist myth, namely, that humans do have a soul. Scientists have subjected Homo sapiens to tens of thousands of bizarre experiments, and looked into every nook in our hearts and every cranny in our brains. But they have so far discovered no magical spark. There is zero scientific evidence that in contrast to pigs, Sapiens have souls.
If that were all, we could well argue that scientists just need to keep looking. If they haven’t found the soul yet, it is because they haven’t looked carefully enough. Yet the life sciences doubt the existence of the soul not merely for lack of evidence, but because the very idea of a soul contradicts the most fundamental principles of evolution. This contradiction is responsible for the unbridled hatred that the theory of evolution inspires among devout monotheists.
According to a 2012 Gallup survey, only 15 per cent of Americans think that Homo sapiens evolved through natural selection alone, free of all divine intervention; 32 per cent maintain that humans may have evolved from earlier life forms in a process lasting millions of years, but God orchestrated this entire show; 46 per cent believe that God created humans in their current form sometime during the last 10,000 years, just as the Bible says. Spending three years in college has absolutely no impact on these views. The same survey found that among BA graduates, 46 per cent believe in the biblical creation story, whereas only 14 per cent think that humans evolved without any divine supervision. Even among holders of MA and PhD degrees, 25 per cent believe the Bible, whereas only 29 per cent credit natural selection alone with the creation of our species.1
Though schools evidently do a very poor job teaching evolution, religious zealots still insist that it should not be taught at all. Alternatively, they demand that children must also be taught the theory of intelligent design, according to which all organisms were created by the design of some higher intelligence (aka God). ‘Teach them both theories,’ say the zealots, ‘and let the kids decide for themselves.’
Why does the theory of evolution provoke such objections, whereas nobody seems to care about the theory of relativity or quantum mechanics? How come politicians don’t ask that kids be exposed to alternative theories about matter, energy, space and time? After all, Darwin’s ideas seem at first sight far less threatening than the monstrosities of Einstein and Werner Heisenberg. The theory of evolution rests on the principle of the survival of the fittest, which is a clear and simple – not to say humdrum – idea. In contrast, the theory of relativity and quantum mechanics argue that you can twist time and space, that something can appear out of nothing, and that a cat can be both alive and dead at the same time. This makes a mockery of our common sense, yet nobody seeks to protect innocent schoolchildren from these scandalous ideas. Why?
The theory of relativity makes nobody angry, because it doesn’t contradict any of our cherished beliefs. Most people don’t care an iota whether space and time are absolute or relative. If you think it is possible to bend space and time, well, be my guest. Go ahead and bend them. What do I care? In contrast, Darwin has deprived us of our souls. If you really understand the theory of evolution, you understand that there is no soul. This is a terrifying thought not only to devout Christians and Muslims, but also to many secular people who don’t hold any clear religious dogma, but nevertheless want to believe that each human possesses an eternal individual essence that remains unchanged throughout life, and can survive even death intact.
The literal meaning of the word ‘individual’ is ‘something that cannot be divided’. That I am an ‘in-dividual’ implies that my true self is a holistic entity rather than an assemblage of separate parts. This indivisible essence allegedly endures from one moment to the next without losing or absorbing anything. My body and brain undergo a constant process of change, as neurons fire, hormones flow and muscles contract. My personality, wishes and relationships never stand still, and may be completely transformed over years and decades. But underneath it all I remain the same person from birth to death – and hopefully beyond death as well.
Unfortunately, the theory of evolution rejects the idea that my true self is some indivisible, immutable and potentially eternal essence. According to the theory of evolution, all biological entities – from elephants and oak trees to cells and DNA molecules – are composed of smaller and simpler parts that ceaselessly combine and separate. Elephants and cells have evolved gradually, as a result of new combinations and splits. Something that cannot be divided or changed cannot have come into existence through natural selection.
The human eye, for example, is an extremely complex system made of numerous smaller parts such as the lens, the cornea and the retina. The eye did not pop out of nowhere complete with all these components. Rather, it evolved step by tiny step through millions of years. Our eye is very similar to the eye of Homo erectus, who lived 1 million years ago. It is somewhat less similar to the eye of Australopithecus, who lived 5 million years ago. It is very different from the eye of Dryolestes, who lived 150 million years ago. And it seems to have nothing in common with the unicellular organisms that inhabited our planet hundreds of millions of years ago.
Yet even unicellular organisms have tiny organelles that enable the microorganism to distinguish light from darkness, and move towards one or the other. The path leading from such archaic sensors to the human eye is long and winding, but if you have hundreds of millions of years to spare, you can certainly cover the entire path, step by step. You can do that because the eye is composed of many different parts. If every few generations a small mutation slightly changes one of these parts – say, the cornea becomes a bit more curved – after millions of generations these changes can result in a human eye. If the eye were a holistic entity, devoid of any parts, it could never have evolved by natural selection.
That’s why the theory of evolution cannot accept the idea of souls, at least if by ‘soul’ we mean something indivisible, immutable and potentially eternal. Such an entity cannot possibly result from a step-by-step evolution. Natural selection could produce a human eye, because the eye has parts. But the soul has no parts. If the Sapiens soul evolved step by step from the Erectus soul, what exactly were these steps? Is there some part of the soul that is more developed in Sapiens than in Erectus? But the soul has no parts.
You might argue that human souls did not evolve, but appeared one bright day in the fullness of their glory. But when exactly was that bright day? When we look closely at the evolution of humankind, it is embarrassingly difficult to find it. Every human that ever existed came into being as a result of male sperm inseminating a female egg. Think of the first baby to possess a soul. That baby was very similar to her mother and father, except that she had a soul and they didn’t. Our biological knowledge can certainly explain the birth of a baby whose cornea was a bit more curved than her parents’ corneas. A slight mutation in a single gene can account for that. But biology cannot explain the birth of a baby possessing an eternal soul from parents who did not have even a shred of a soul. Is a single mutation, or even several mutations, enough to give an animal an essence secure against all changes, including even death?
Hence the existence of souls cannot be squared with the theory of evolution. Evolution means change, and is incapable of producing everlasting entities. From an evolutionary perspective, the closest thing we have to a human essence is our DNA, and the DNA molecule is the vehicle of mutation rather than the seat of eternity. This terrifies large numbers of people, who prefer to reject the theory of evolution rather than give up their souls.
Another story employed to justify human superiority says that of all the animals on earth, only Homo sapiens has a conscious mind. Mind is something very different from soul. The mind isn’t some mystical eternal entity. Nor is it an organ such as the eye or the brain. Rather, the mind is a flow of subjective experiences, such as pain, pleasure, anger and love. These mental experiences are made of interlinked sensations, emotions and thoughts, which flash for a brief moment, and immediately disappear. Then other experiences flicker and vanish, arising for an instant and passing away. (When reflecting on it, we often try to sort the experiences into distinct categories such as sensations, emotions and thoughts, but in actuality they are all mingled together.) This frenzied collection of experiences constitutes the stream of consciousness. Unlike the everlasting soul, the mind has many parts, it constantly changes, and there is no reason to think it is eternal.
The soul is a story that some people accept while others reject. The stream of consciousness, in contrast, is the concrete reality we directly witness every moment. It is the surest thing in the world. You cannot doubt its existence. Even when we are consumed by doubt and ask ourselves: ‘Do subjective experiences really exist?’ we can be certain that we are experiencing doubt.
What exactly are the conscious experiences that constitute the flow of the mind? Every subjective experience has two fundamental characteristics: sensation and desire. Robots and computers have no consciousness because despite their myriad abilities they feel nothing and crave nothing. A robot may have an energy sensor that signals to its central processing unit when the battery is about to run out. The robot may then move towards an electrical socket, plug itself in and recharge its battery. However, throughout this process the robot doesn’t experience anything. In contrast, a human being depleted of energy feels hunger and craves to stop this unpleasant sensation. That’s why we say that humans are conscious beings and robots aren’t, and why it is a crime to make people work until they collapse from hunger and exhaustion, whereas making robots work until their batteries run out carries no moral opprobrium.
And what about animals? Are they conscious? Do they have subjective experiences? Is it okay to force a horse to work until he collapses from exhaustion? As noted earlier, the life sciences currently argue that all mammals and birds, and at least some reptiles and fish, have sensations and emotions. However, the most up-to-date theories also maintain that sensations and emotions are biochemical data-processing algorithms. Since we know that robots and computers process data without having any subjective experiences, maybe it works the same with animals? Indeed, we know that even in humans many sensory and emotional brain circuits can process data and initiate actions completely unconsciously. So perhaps behind all the sensations and emotions we ascribe to animals – hunger, fear, love and loyalty – lurk only unconscious algorithms rather than subjective experiences?2
This theory was upheld by the father of modern philosophy, René Descartes. In the seventeenth century Descartes maintained that only humans feel and crave, whereas all other animals are mindless automata, akin to a robot or a vending machine. When a man kicks a dog, the dog experiences nothing. The dog flinches and howls automatically, just like a humming vending machine that makes a cup of coffee without feeling or wanting anything.
This theory was widely accepted in Descartes’ day. Seventeenth-century doctors and scholars dissected live dogs and observed the working of their internal organs, without either anaesthetics or scruples. They didn’t see anything wrong with that, just as we don’t see anything wrong in opening the lid of a vending machine and observing its gears and conveyors. In the early twenty-first century there are still plenty of people who argue that animals have no consciousness, or at most, that they have a very different and inferior type of consciousness.
In order to decide whether animals have conscious minds similar to our own, we must first get a better understanding of how minds function, and what role they play. These are extremely difficult questions, but it is worthwhile to devote some time to them, because the mind will be the hero of several subsequent chapters. We won’t be able to grasp the full implications of novel technologies such as artificial intelligence if we don’t know what minds are. Hence let’s leave aside for a moment the particular question of animal minds, and examine what science knows about minds and consciousness in general. We will focus on examples taken from the study of human consciousness – which is more accessible to us – and later on return to animals and ask whether what’s true of humans is also true of our furry and feathery cousins.
To be frank, science knows surprisingly little about mind and consciousness. Current orthodoxy holds that consciousness is created by electrochemical reactions in the brain, and that mental experiences fulfil some essential data-processing function.3 However, nobody has any idea how a congeries of biochemical reactions and electrical currents in the brain creates the subjective experience of pain, anger or love. Perhaps we will have a solid explanation in ten or fifty years. But as of 2016, we have no such explanation, and we had better be clear about that.
Using fMRI scans, implanted electrodes and other sophisticated gadgets, scientists have certainly identified correlations and even causal links between electrical currents in the brain and various subjective experiences. Just by looking at brain activity, scientists can know whether you are awake, dreaming or in deep sleep. They can briefly flash an image in front of your eyes, just at the threshold of conscious perception, and determine (without asking you) whether you have become aware of the image or not. They have even managed to link individual brain neurons with specific mental content, discovering for example a ‘Bill Clinton’ neuron and a ‘Homer Simpson’ neuron. When the ‘Bill Clinton’ neuron is on, the person is thinking of the forty-second president of the USA; show the person an image of Homer Simpson, and the eponymous neuron is bound to ignite.
More broadly, scientists know that if an electric storm arises in a given brain area, you probably feel angry. If this storm subsides and a different area lights up – you are experiencing love. Indeed, scientists can even induce feelings of anger or love by electrically stimulating the right neurons. But how on earth does the movement of electrons from one place to the other translate into a subjective image of Bill Clinton, or a subjective feeling of anger or love?
The most common explanation points out that the brain is a highly complex system, with more than 80 billion neurons connected into numerous intricate webs. When billions of neurons send billions of electric signals back and forth, subjective experiences emerge. Even though the sending and receiving of each electric signal is a simple biochemical phenomenon, the interaction among all these signals creates something far more complex – the stream of consciousness. We observe the same dynamic in many other fields. The movement of a single car is a simple action, but when millions of cars move and interact simultaneously, traffic jams emerge. The buying and selling of a single share is simple enough, but when millions of traders buy and sell millions of shares it can lead to economic crises that dumbfound even the experts.
Yet this explanation explains nothing. It merely affirms that the problem is very complicated. It does not offer any insight into how one kind of phenomenon (billions of electric signals moving from here to there) creates a very different kind of phenomenon (subjective experiences of anger or love). The analogy to other complex processes such as traffic jams and economic crises is flawed. What creates a traffic jam? If you follow a single car, you will never understand it. The jam results from the interactions among many cars. Car A influences the movement of car B, which blocks the path of car C, and so on. Yet if you map the movements of all the relevant cars, and how each impacts the other, you will get a complete account of the traffic jam. It would be pointless to ask, ‘But how do all these movements create the traffic jam?’ For ‘traffic jam’ is simply the abstract term we humans decided to use for this particular collection of events.
In contrast, ‘anger’ isn’t an abstract term we have decided to use as a shorthand for billions of electric brain signals. Anger is an extremely concrete experience which people were familiar with long before they knew anything about electricity. When I say, ‘I am angry!’ I am pointing to a very tangible feeling. If you describe how a chemical reaction in a neuron results in an electric signal, and how billions of similar reactions result in billions of additional signals, it is still worthwhile to ask, ‘But how do these billions of events come together to create my concrete feeling of anger?’
When thousands of cars slowly edge their way through London, we call that a traffic jam, but it doesn’t create some great Londonian consciousness that hovers high above Piccadilly and says to itself, ‘Blimey, I feel jammed!’ When millions of people sell billions of shares, we call that an economic crisis, but no great Wall Street spirit grumbles, ‘Shit, I feel I am in crisis.’ When trillions of water molecules coalesce in the sky we call that a cloud, but no cloud consciousness emerges to announce, ‘I feel rainy.’ How is it, then, that when billions of electric signals move around in my brain, a mind emerges that feels ‘I am furious!’? As of 2016, we have absolutely no idea.
Hence if this discussion has left you confused and perplexed, you are in very good company. The best scientists too are a long way from deciphering the enigma of mind and consciousness. One of the wonderful things about science is that when scientists don’t know something, they can try out all kinds of theories and conjectures, but in the end they can just admit their ignorance.
Scientists don’t know how a collection of electric brain signals creates subjective experiences. Even more crucially, they don’t know what could be the evolutionary benefit of such a phenomenon. It is the greatest lacuna in our understanding of life. Humans have feet, because for millions of generations feet enabled our ancestors to chase rabbits and escape lions. Humans have eyes, because for countless millennia eyes enabled our forebears to see whither the rabbit was heading and whence the lion was coming. But why do humans have subjective experiences of hunger and fear?
Not long ago, biologists gave a very simple answer. Subjective experiences are essential for our survival, because if we didn’t feel hunger or fear we would not have bothered to chase rabbits and flee lions. Upon seeing a lion, why did a man flee? Well, he was frightened, so he ran away. Subjective experiences explained human actions. Yet today scientists provide a much more detailed explanation. When a man sees a lion, electric signals move from the eye to the brain. The incoming signals stimulate certain neurons, which react by firing off more signals. These stimulate other neurons down the line, which fire in their turn. If enough of the right neurons fire at a sufficiently rapid rate, commands are sent to the adrenal glands to flood the body with adrenaline, the heart is instructed to beat faster, while neurons in the motor centre send signals down to the leg muscles, which begin to stretch and contract, and the man runs away from the lion.
Ironically, the better we map this process, the harder it becomes to explain conscious feelings. The better we understand the brain, the more redundant the mind seems. If the entire system works by electric signals passing from here to there, why the hell do we also need to feel fear? If a chain of electrochemical reactions leads all the way from the nerve cells in the eye to the movements of leg muscles, why add subjective experiences to this chain? What do they do? Countless domino pieces can fall one after the other without any need of subjective experiences. Why do neurons need feelings in order to stimulate one another, or in order to tell the adrenal gland to start pumping? Indeed, 99 per cent of bodily activities, including muscle movement and hormonal secretions, take place without any need of conscious feelings. So why do the neurons, muscles and glands need such feelings in the remaining 1 per cent of cases?
You might argue that we need a mind because the mind stores memories, makes plans and autonomously sparks completely new images and ideas. It doesn’t just respond to outside stimuli. For example, when a man sees a lion, he doesn’t react automatically to the sight of the predator. He remembers that a year ago a lion ate his aunt. He imagines how he would feel if a lion tore him to pieces. He contemplates the fate of his orphaned children. That’s why he flees. Indeed, many chain reactions begin with the mind’s own initiative rather than with any immediate external stimulus. Thus a memory of some prior lion attack might spontaneously pop up in a man’s mind, setting him thinking about the danger posed by lions. He then gets all the tribespeople together and they brainstorm novel methods for scaring lions away.
But wait a moment. What are all these memories, imaginations and thoughts? Where do they exist? According to current biological theories, our memories, imaginations and thoughts don’t exist in some higher immaterial field. Rather, they too are avalanches of electric signals fired by billions of neurons. Hence even when we figure in memories, imaginations and thoughts, we are still left with a series of electrochemical reactions that pass through billions of neurons, ending with the activity of adrenal glands and leg muscles.
Is there even a single step on this long and twisting journey where, between the action of one neuron and the reaction of the next, the mind intervenes and decides whether the second neuron should fire or not? Is there any material movement, of even a single electron, that is caused by the subjective experience of fear rather than by the prior movement of some other particle? If there is no such movement – and if every electron moves because another electron moved earlier – why do we need to experience fear? We have no clue.
Philosophers have encapsulated this riddle in a trick question: what happens in the mind that doesn’t happen in the brain? If nothing happens in the mind except what happens in our massive network of neurons – then why do we need the mind? If something does indeed happen in the mind over and above what happens in the neural network – where the hell does it happen? Suppose I ask you what Homer Simpson thought about Bill Clinton and the Monica Lewinsky scandal. You have probably never thought about this before, so your mind now needs to fuse two previously unrelated memories, perhaps conjuring up an image of Homer drinking beer while watching the president give his ‘I did not have sexual relations with that woman’ speech. Where does this fusion take place?
Some brain scientists argue that it happens in the ‘global workspace’ created by the interaction of many neurons.4 Yet the word ‘workspace’ is just a metaphor. What is the reality behind the metaphor? Where do the different pieces of information actually meet and fuse? According to current theories, it certainly doesn’t take place in some Platonic fifth dimension. Rather, it takes place, say, where two previously unconnected neurons suddenly start firing signals to one another. A new synapse is formed between the Bill Clinton neuron and the Homer Simpson neuron. But if so, why do we need the conscious experience of memory over and above the physical event of the two neurons connecting?
We can pose the same riddle in mathematical terms. Present-day dogma holds that organisms are algorithms, and that algorithms can be represented in mathematical formulas. You can use numbers and mathematical symbols to write the series of steps a vending machine takes to prepare a cup of tea, and the series of steps a brain takes when it is alarmed by the approach of a lion. If so, and if conscious experiences fulfil some important function, they must have a mathematical representation. For they are an essential part of the algorithm. When we write the fear algorithm, and break ‘fear’ down into a series of precise calculations, we should be able to point out: ‘Here, step number ninety-three in the calculation process – this is the subjective experience of fear!’ But is there any algorithm in the huge realm of mathematics that contains a subjective experience? So far, we don’t know of any such algorithm. Despite the vast knowledge we have gained in the fields of mathematics and computer science, none of the data-processing systems we have created needs subjective experiences in order to function, and none feels pain, pleasure, anger or love.5
Maybe we need subjective experiences in order to think about ourselves? An animal wandering the savannah and calculating its chances of survival and reproduction must represent its own actions and decisions to itself, and sometimes communicate them to other animals as well. As the brain tries to create a model of its own decisions, it gets trapped in an infinite regression, and abracadabra! Out of this loop, consciousness pops out.
Fifty years ago this might have sounded plausible, but not in 2016. Several corporations, such as Google and Tesla, are engineering autonomous cars that already cruise our roads. The algorithms controlling the autonomous car make millions of calculations each second concerning other cars, pedestrians, traffic lights and potholes. The autonomous car successfully stops at red lights, bypasses obstacles and keeps a safe distance from other vehicles – without feeling any fear. The car also needs to take itself into account and to communicate its plans and desires to the surrounding vehicles, because if it decides to swerve to the right, doing so will impact on their behaviour. The car does all that without any problem – but without any consciousness either. The autonomous car isn’t special. Many other computer programs make allowances for their own actions, yet none of them has developed consciousness, and none feels or desires anything.6
If we cannot explain the mind, and if we don’t know what function it fulfils, why not just discard it? The history of science is replete with abandoned concepts and theories. For instance, early modern scientists who tried to account for the movement of light postulated the existence of a substance called ether, which supposedly fills the entire universe. Light was thought to be waves of ether. However, scientists failed to find any empirical evidence for the existence of ether, whereas they did come up with alternative and better theories of light. Consequently, they threw ether into the dustbin of science.
15. The Google autonomous car on the road.
{© Karl Mondon/ZUMA Press/Corbis.}
Similarly, for thousands of years humans used God to explain numerous natural phenomena. What causes lightning to strike? God. What makes the rain fall? God. How did life on earth begin? God did it. Over the last few centuries scientists have not discovered any empirical evidence for God’s existence, while they did find much more detailed explanations for lightning strikes, rain and the origins of life. Consequently, with the exception of a few subfields of philosophy, no article in any peer-reviewed scientific journal takes God’s existence seriously. Historians don’t argue that the Allies won the Second World War because God was on their side; economists don’t blame God for the 1929 economic crisis; and geologists don’t invoke His will to explain tectonic plate movements.
The same fate has befallen the soul. For thousands of years people believed that all our actions and decisions emanate from our souls. Yet in the absence of any supporting evidence, and given the existence of much more detailed alternative theories, the life sciences have ditched the soul. As private individuals, many biologists and doctors may go on believing in souls. Yet they never write about them in serious scientific journals.
Maybe the mind should join the soul, God and ether in the dustbin of science? After all, no one has ever seen experiences of pain or love through a microscope, and we have a very detailed biochemical explanation for pain and love that leaves no room for subjective experiences. However, there is a crucial difference between mind and soul (as well as between mind and God). Whereas the existence of eternal souls is pure conjecture, the experience of pain is a direct and very tangible reality. When I step on a nail, I can be 100 per cent certain that I feel pain (even if I so far lack a scientific explanation for it). In contrast, I cannot be certain that if the wound becomes infected and I die of gangrene, my soul will continue to exist. It’s a very interesting and comforting story which I would be happy to believe, but I have no direct evidence for its veracity. Since all scientists constantly experience subjective feelings such as pain and doubt, they cannot deny their existence.
Another way to dismiss mind and consciousness is to deny their relevance rather than their existence. Some scientists – such as Daniel Dennett and Stanislas Dehaene – argue that all relevant questions can be answered by studying brain activities, without any recourse to subjective experiences. So scientists can safely delete ‘mind’, ‘consciousness’ and ‘subjective experiences’ from their vocabulary and articles. However, as we shall see in the following chapters, the whole edifice of modern politics and ethics is built upon subjective experiences, and few ethical dilemmas can be solved by referring strictly to brain activities. For example, what’s wrong with torture or rape? From a purely neurological perspective, when a human is tortured or raped certain biochemical reactions happen in the brain, and various electrical signals move from one bunch of neurons to another. What could possibly be wrong with that? Most modern people have ethical qualms about torture and rape because of the subjective experiences involved. If any scientist wants to argue that subjective experiences are irrelevant, their challenge is to explain why torture or rape are wrong without reference to any subjective experience.
Finally, some scientists concede that consciousness is real and may actually have great moral and political value, but maintain that it fulfils no biological function whatsoever. Consciousness is the biologically useless by-product of certain brain processes. Jet engines roar loudly, but the noise doesn’t propel the aeroplane forward. Humans don’t need carbon dioxide, but each and every breath fills the air with more of the stuff. Similarly, consciousness may be a kind of mental pollution produced by the firing of complex neural networks. It doesn’t do anything. It is just there. If this is true, it implies that all the pain and pleasure experienced by billions of creatures for millions of years is just mental pollution. This is certainly a thought worth thinking, even if it isn’t true. But it is quite amazing to realise that as of 2016, this is the best theory of consciousness that contemporary science has to offer us.
Maybe the life sciences view the problem from the wrong angle. They believe that life is all about data processing, and that organisms are machines for making calculations and taking decisions. However, this analogy between organisms and algorithms might mislead us. In the nineteenth century, scientists described brains and minds as if they were steam engines. Why steam engines? Because that was the leading technology of the day, which powered trains, ships and factories, so when humans tried to explain life, they assumed it must work according to analogous principles. Mind and body are made of pipes, cylinders, valves and pistons that build and release pressure, thereby producing movements and actions. Such thinking had a deep influence even on Freudian psychology, which is why much of our psychological jargon is still replete with concepts borrowed from mechanical engineering.
Consider, for example, the following Freudian argument: ‘Armies harness the sex drive to fuel military aggression. The army recruits young men just when their sexual drive is at its peak. The army limits the soldiers’ opportunities of actually having sex and releasing all that pressure, which consequently accumulates inside them. The army then redirects this pent-up pressure and allows it to be released in the form of military aggression.’ This is exactly how a steam engine works. You trap boiling steam inside a closed container. The steam builds up more and more pressure, until suddenly you open a valve, and release the pressure in a predetermined direction, harnessing it to propel a train or a loom. Not only in armies, but in all fields of activity, we often complain about the pressure building up inside us, and we fear that unless we ‘let off some steam’, we might explode.
In the twenty-first century it sounds childish to compare the human psyche to a steam engine. Today we know of a far more sophisticated technology – the computer – so we explain the human psyche as if it were a computer processing data rather than a steam engine regulating pressure. But this new analogy may turn out to be just as naïve. After all, computers have no minds. They don’t crave anything even when they have a bug, and the Internet doesn’t feel pain even when authoritarian regimes sever entire countries from the Web. So why use computers as a model for understanding the mind?
Well, are we really sure that computers have no sensations or desires? And even if they haven’t got any at present, perhaps once they become complex enough they might develop consciousness? If that were to happen, how could we ascertain it? When computers replace our bus driver, our teacher and our shrink, how could we determine whether they have feelings or whether they are just a collection of mindless algorithms?
When it comes to humans, we are today capable of differentiating between conscious mental experiences and non-conscious brain activities. Though we are far from understanding consciousness, scientists have succeeded in identifying some of its electrochemical signatures. To do so the scientists started with the assumption that whenever humans report that they are conscious of something, they can be believed. Based on this assumption the scientists could then isolate specific brain patterns that appear every time humans report being conscious, but that never appear during unconscious states.
This has allowed the scientists to determine, for example, whether a seemingly vegetative stroke victim has completely lost consciousness, or has merely lost control of his body and speech. If the patient’s brain displays the telltale signatures of consciousness, he is probably conscious, even though he cannot move or speak. Indeed, doctors have recently managed to communicate with such patients using fMRI imaging. They ask the patients yes/no questions, telling them to imagine themselves playing tennis if the answer is yes, and to visualise the location of their home if the answer is no. The doctors can then observe how the motor cortex lights up when patients imagine playing tennis (meaning ‘yes’), whereas ‘no’ is indicated by the activation of brain areas responsible for spatial memory.7
This is all very well for humans, but what about computers? Since silicon-based computers have very different structures to carbon-based human neural networks, the human signatures of consciousness may not be relevant to them. We seem to be trapped in a vicious circle. Starting with the assumption that we can believe humans when they report that they are conscious, we can identify the signatures of human consciousness, and then use these signatures to ‘prove’ that humans are indeed conscious. But if an artificial intelligence self-reports that it is conscious, should we just believe it?
So far, we have no good answer to this problem. Already thousands of years ago philosophers realised that there is no way to prove conclusively that anyone other than oneself has a mind. Indeed, even in the case of other humans, we just assume they have consciousness – we cannot know that for certain. Perhaps I am the only being in the entire universe who feels anything, and all other humans and animals are just mindless robots? Perhaps I am dreaming, and everyone I meet is just a character in my dream? Perhaps I am trapped inside a virtual world, and all the beings I see are merely simulations?
According to current scientific dogma, everything I experience is the result of electrical activity in my brain, and it should therefore be theoretically feasible to simulate an entire virtual world that I could not possibly distinguish from the ‘real’ world. Some brain scientists believe that in the not too distant future, we shall actually do such things. Well, maybe it has already been done – to you? For all you know, the year might be 2216 and you are a bored teenager immersed inside a ‘virtual world’ game that simulates the primitive and exciting world of the early twenty-first century. Once you acknowledge the mere feasibility of this scenario, mathematics leads you to a very scary conclusion: since there is only one real world, whereas the number of potential virtual worlds is infinite, the probability that you happen to inhabit the sole real world is almost zero.
None of our scientific breakthroughs has managed to overcome this notorious Problem of Other Minds. The best test that scholars have so far come up with is called the Turing Test, but it examines only social conventions. According to the Turing Test, in order to determine whether a computer has a mind, you should communicate simultaneously both with that computer and with a real person, without knowing which is which. You can ask whatever questions you want, you can play games, argue, and even flirt with them. Take as much time as you like. Then you need to decide which is the computer, and which is the human. If you cannot make up your mind, or if you make a mistake, the computer has passed the Turing Test, and we should treat it as if it really has a mind. However, that won’t really be a proof, of course. Acknowledging the existence of other minds is merely a social and legal convention.
The Turing Test was invented in 1950 by the British mathematician Alan Turing, one of the fathers of the computer age. Turing was also a gay man in a period when homosexuality was illegal in Britain. In 1952 he was convicted of committing homosexual acts and forced to undergo chemical castration. Two years later he committed suicide. The Turing Test is simply a replication of a mundane test every gay man had to undergo in 1950s Britain: can you pass for a straight man? Turing knew from personal experience that it didn’t matter who you really were – it mattered only what others thought about you. According to Turing, in the future computers would be just like gay men in the 1950s. It won’t matter whether computers will actually be conscious or not. It will matter only what people think about it.
Having acquainted ourselves with the mind – and with how little we really know about it – we can return to the question of whether other animals have minds. Some animals, such as dogs, certainly pass a modified version of the Turing Test. When humans try to determine whether an entity is conscious, what we usually look for is not mathematical aptitude or good memory, but rather the ability to create emotional relationships with us. People sometimes develop deep emotional attachments to fetishes like weapons, cars and even underwear, but these attachments are one-sided and never develop into relationships. The fact that dogs can be party to emotional relationships with humans convinces most dog owners that dogs are not mindless automata.
This, however, won’t satisfy sceptics, who point out that emotions are algorithms, and that no known algorithm requires consciousness in order to function. Whenever an animal displays complex emotional behaviour, we cannot prove that this is not the result of some very sophisticated but non-conscious algorithm. This argument, of course, can be applied to humans too. Everything a human does – including reporting on allegedly conscious states – might in theory be the work of non-conscious algorithms.
In the case of humans, we nevertheless assume that whenever someone reports that he or she is conscious, we can take their word for it. Based on this minimal assumption, we can today identify the brain signatures of consciousness, which can then be used systematically to differentiate conscious from non-conscious states in humans. Yet since animal brains share many features with human brains, as our understanding of the signatures of consciousness deepens, we might be able to use them to determine if and when other animals are conscious. If a canine brain shows similar patterns to those of a conscious human brain, this will provide strong evidence that dogs are conscious.
Initial tests on monkeys and mice indicate that at least monkey and mouse brains indeed display the signatures of consciousness.8 However, given the differences between animal brains and human brains, and given that we are still far from deciphering all the secrets of consciousness, developing decisive tests that will satisfy the sceptics might take decades. Who should carry the burden of proof in the meantime? Do we consider dogs to be mindless machines until proven otherwise, or do we treat dogs as conscious beings as long as nobody comes up with some convincing counter-evidence?
On 7 July 2012 leading experts in neurobiology and the cognitive sciences gathered at the University of Cambridge, and signed the Cambridge Declaration on Consciousness, which says that ‘Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviours. Consequently, the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Nonhuman animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neurological substrates.’9 This declaration stops short of saying that other animals are conscious, because we still lack the smoking gun. But it does shift the burden of proof to those who think otherwise.
Responding to the shifting winds of the scientific community, in May 2015 New Zealand became the first country in the world to legally recognise animals as sentient beings, when the New Zealand parliament passed the Animal Welfare Amendment Act. The Act stipulates that it is now obligatory to recognise animals as sentient, and hence attend properly to their welfare in contexts such as animal husbandry. In a country with far more sheep than humans (30 million vs 4.5 million), that is a very significant statement. The Canadian province of Quebec has since passed a similar Act, and other countries are likely to follow suit.
Many business corporations also recognise animals as sentient beings, though paradoxically, this often exposes the animals to rather unpleasant laboratory tests. For example, pharmaceutical companies routinely use rats as experimental subjects in the development of antidepressants. According to one widely used protocol, you take a hundred rats (for statistical reliability) and place each rat inside a glass tube filled with water. The rats struggle again and again to climb out of the tubes, without success. After fifteen minutes most give up and stop moving. They just float in the tube, apathetic to their surroundings.
You now take another hundred rats, throw them in, but fish them out of the tube after fourteen minutes, just before they are about to despair. You dry them, feed them, give them a little rest – and then throw them back in. The second time, most rats struggle for twenty minutes before calling it quits. Why the extra six minutes? Because the memory of past success triggers the release of some biochemical in the brain that gives the rats hope and delays the advent of despair. If we could only isolate this biochemical, we might use it as an antidepressant for humans. But numerous chemicals flood a rat’s brain at any given moment. How can we pinpoint the right one?
For this you take more groups of rats, who have never participated in the test before. You inject each group with a particular chemical, which you suspect to be the hoped-for antidepressant. You throw the rats into the water. If rats injected with chemical A struggle for only fifteen minutes before becoming depressed, you can cross out A on your list. If rats injected with chemical B go on thrashing for twenty minutes, you can tell the CEO and the shareholders that you might have just hit the jackpot.
16. Left: A hopeful rat struggling to escape the glass tube. Right: An apathetic rat floating in the glass tube, having lost all hope.
16. Adapted from Weiss, J.M., Cierpial, M.A. & West, C.H., ‘Selective breeding of rats for high and low motor activity in a swim test: toward a new animal model of depression’, Pharmacology, Biochemistry and Behavior 61:49–66 (1998).
Sceptics could object that this entire description needlessly humanises rats. Rats experience neither hope nor despair. Sometimes rats move quickly and sometimes they stand still, but they never feel anything. They are driven only by non-conscious algorithms. Yet if so, what’s the point of all these experiments? Psychiatric drugs aim to induce changes not just in human behaviour, but above all in human feeling. When customers go to a psychiatrist and say, ‘Doctor, give me something that will lift me out of this depression,’ they don’t want a mechanical stimulant that will cause them to flail about while still feeling blue. They want to feel cheerful. Conducting experiments on rats can help corporations develop such a magic pill only if they presuppose that rat behaviour is accompanied by human-like emotions. And indeed, this is a common presupposition in psychiatric laboratories.10
Another attempt to enshrine human superiority accepts that rats, dogs and other animals have consciousness, but argues that, unlike humans, they lack self-consciousness. They may feel depressed, happy, hungry or satiated, but they have no notion of self, and they are not aware that the depression or hunger they feel belongs to a unique entity called ‘I’.
This idea is as common as it is opaque. Obviously, when a dog feels hungry, he grabs a piece of meat for himself rather than serve food to another dog. Let a dog sniff a tree watered by the neighbourhood dogs, and he will immediately know whether it smells of his own urine, of the neighbour’s cute Labrador’s or of some stranger’s. Dogs react very differently to their own odour and to the odours of potential mates and rivals.11 So what does it mean that they lack self-consciousness?
A more sophisticated version of the argument says that there are different levels of self-consciousness. Only humans understand themselves as an enduring self that has a past and a future, perhaps because only humans can use language in order to contemplate their past experiences and future actions. Other animals exist in an eternal present. Even when they seem to remember the past or plan for the future, they are in fact reacting only to present stimuli and momentary urges.12 For instance, a squirrel hiding nuts for the winter doesn’t really remember the hunger he felt last winter, nor is he thinking about the future. He just follows a momentary urge, oblivious to the origins and purpose of this urge. That’s why even very young squirrels, who haven’t yet lived through a winter and hence cannot remember winter, nevertheless cache nuts during the summer.
Yet it is unclear why language should be a necessary condition for being aware of past or future events. The fact that humans use language to do so is hardly a proof. Humans also use language to express their love or their fear, but other animals may well experience and even express love and fear non-verbally. Indeed, humans themselves are often aware of past and future events without verbalising them. Especially in dream states, we can be aware of entire non-verbal narratives – which upon waking we struggle to describe in words.
Various experiments indicate that at least some animals – including birds such as parrots and scrub jays – do remember individual incidents and consciously plan for future eventualities.13 However, it is impossible to prove this beyond doubt, because no matter how sophisticated a behaviour an animal exhibits, sceptics can always claim that it results from unconscious algorithms in its brain rather than from conscious images in its mind.
To illustrate this problem consider the case of Santino, a male chimpanzee from the Furuvik Zoo in Sweden. To relieve the boredom in his compound Santino developed an exciting hobby: throwing stones at visitors to the zoo. In itself, this is hardly unique. Angry chimpanzees often throw stones, sticks and even excrement. However, Santino was planning his moves in advance. During the early morning, long before the zoo opened for visitors, Santino collected projectiles and placed them in a heap, without showing any visible signs of anger. Guides and visitors soon learned to be wary of Santino, especially when he was standing near his pile of stones, and as a result he had increasing difficulty finding targets.
In May 2010, Santino responded with a new strategy. In the early morning he took bales of straw from his sleeping quarters and placed them close to the compound’s wall, where visitors usually gather to watch the chimps. He then collected stones and hid them under the straw. An hour or so later, when the first visitors approached, Santino kept his cool, showing no signs of irritation or aggression. Only when his victims were within range did Santino suddenly grab the stones from their hiding place and bombard the frightened humans, who would scuttle in all directions. In the summer of 2012 Santino sped up the arms race, caching stones not only under straw bales, but also in tree trunks, buildings and any other suitable hiding place.
Yet even Santino doesn’t satisfy the sceptics. How can we be certain that at 7 a.m., when Santino goes about secreting stones here and there, he is imagining how fun it will be to pelt the visiting humans at noon? Maybe Santino is driven by some non-conscious algorithm, just like a young squirrel hiding nuts ‘for winter’ even though he has never experienced winter?14
Similarly, say the sceptics, a male chimpanzee attacking a rival who hurt him weeks earlier isn’t really avenging the old insult. He is just reacting to a momentary feeling of anger, the cause of which is beyond him. When a mother elephant sees a lion threatening her calf, she rushes forward and risks her life not because she remembers that this is her beloved offspring whom she has been nurturing for months; rather, she is impelled by some unfathomable sense of hostility towards the lion. And when a dog jumps for joy when his owner comes home, the dog isn’t recognising the man who fed and cuddled him from infancy. He is simply overwhelmed by an unexplained ecstasy.15
We cannot prove or disprove any of these claims, because they are in fact variations on the Problem of Other Minds. Since we aren’t familiar with any algorithm that requires consciousness, anything an animal does can be seen as the product of non-conscious algorithms rather than of conscious memories and plans. So in Santino’s case too, the real question concerns the burden of proof. What is the most likely explanation for Santino’s behaviour? Should we assume that he is consciously planning for the future, and anyone who disagrees should provide some counter-evidence? Or is it more reasonable to think that the chimpanzee is driven by a non-conscious algorithm, and all he consciously feels is a mysterious urge to place stones under bales of straw?
And even if Santino doesn’t remember the past and doesn’t imagine the future, does it mean he lacks self-consciousness? After all, we ascribe self-consciousness to humans even when they are not busy remembering the past or dreaming about the future. For example, when a human mother sees her toddler wandering onto a busy road, she doesn’t stop to think about either past or future. Just like the mother elephant, she too just races to save her child. Why not say about her what we say about the elephant, namely that ‘when the mother rushed to save her baby from the oncoming danger, she did it without any self-consciousness. She was merely driven by a momentary urge’?
Similarly, consider a young couple kissing passionately on their first date, a soldier charging into heavy enemy fire to save a wounded comrade, or an artist drawing a masterpiece in a frenzy of brushstrokes. None of them stops to contemplate the past or the future. Does it mean they lack self-consciousness, and that their state of being is inferior to that of a politician giving an election speech about his past achievements and future plans?
In 2010 scientists conducted an unusually touching rat experiment. They locked a rat in a tiny cage, placed the cage within a much larger cell and allowed another rat to roam freely through that cell. The caged rat gave out distress signals, which caused the free rat also to exhibit signs of anxiety and stress. In most cases, the free rat proceeded to help her trapped companion, and after several attempts usually succeeded in opening the cage and liberating the prisoner. The researchers then repeated the experiment, this time placing chocolate in the cell. The free rat now had to choose between either liberating the prisoner, or enjoying the chocolate all by herself. Many rats preferred to first free their companion and share the chocolate (though quite a few behaved more selfishly, proving perhaps that some rats are meaner than others).
Sceptics dismissed these results, arguing that the free rat liberated the prisoner not out of empathy, but simply in order to stop the annoying distress signals. The rats were motivated by the unpleasant sensations they felt, and they sought nothing grander than ending these sensations. Maybe. But we could say exactly the same thing about us humans. When I donate money to a beggar, am I not reacting to the unpleasant sensations that the sight of the beggar causes me to feel? Do I really care about the beggar, or do I simply want to feel better myself?16
In essence, we humans are not that different from rats, dogs, dolphins or chimpanzees. Like them, we too have no soul. Like us, they too have consciousness and a complex world of sensations and emotions. Of course, every animal has its unique traits and talents. Humans too have their special gifts. We shouldn’t humanise animals needlessly, imagining that they are just a furrier version of ourselves. This is not only bad science, but it also prevents us from understanding and valuing other animals on their terms.
In the early 1900s, a horse called Clever Hans became a German celebrity. Touring Germany’s towns and villages, Hans showed off a remarkable grasp of the German language, and an even more remarkable mastery of mathematics. When asked, ‘Hans, what is four times three?’ Hans tapped his hoof twelve times. When shown a written message asking, ‘What is twenty minus eleven?’ Hans tapped nine times, with commendable Prussian precision.
In 1904 the German board of education appointed a special scientific commission headed by a psychologist to look into the matter. The thirteen members of the commission – which included a circus manager and a veterinarian – were convinced this must be a scam, but despite their best efforts they couldn’t uncover any fraud or subterfuge. Even when Hans was separated from his owner, and complete strangers presented him with the questions, Hans still got most of the answers right.
In 1907 the psychologist Oskar Pfungst began another investigation that finally revealed the truth. It turned out that Hans got the answers right by carefully observing the body language and facial expressions of his interlocutors. When Hans was asked what is four times three, he knew from past experience that the human was expecting him to tap his hoof a given number of times. He began tapping, while closely monitoring the human. As Hans approached the correct number of taps the human became more and more tense, and when Hans tapped the right number, the tension reached its peak. Hans knew how to recognise this by the human’s body posture and the look on the human’s face. He then stopped tapping, and watched how tension was replaced by amazement or laughter. Hans knew he had got it right.
Clever Hans is often given as an example of the way humans erroneously humanise animals, ascribing to them far more amazing abilities than they actually possess. In fact, however, the lesson is just the opposite. The story demonstrates that by humanising animals we usually underestimate animal cognition and ignore the unique abilities of other creatures. As far as maths goes, Hans was hardly a genius. Any eight-year-old kid could do much better. However, in his ability to deduce emotions and intentions from body language, Hans was a true genius. If a Chinese person were to ask me in Mandarin what is four times three, there is no way that I could correctly tap my foot twelve times simply by observing facial expressions and body language. Clever Hans enjoyed this ability because horses normally communicate with each other through body language. What was remarkable about Hans, however, is that he could use the method to decipher the emotions and intentions not only of his fellow horses, but also of unfamiliar humans.
17. Clever Hans on stage in 1904.
17. © 2004 TopFoto.
If animals are so clever, why don’t horses harness humans to carts, rats conduct experiments on us, and dolphins make us jump through hoops? Homo sapiens surely has some unique ability that enables it to dominate all the other animals. Having dismissed the overblown notions that Homo sapiens exists on an entirely different plane from other animals, or that humans possess some unique essence like soul or consciousness, we can finally climb down to the level of reality and examine the particular physical or mental abilities that give our species its edge.
Most studies cite tool production and intelligence as particularly important for the ascent of humankind. Though other animals also produce tools, there is little doubt that humans far surpass them in that field. Things are a bit less clear with regard to intelligence. An entire industry is devoted to defining and measuring intelligence but is a long way from reaching a consensus. Luckily, we don’t have to enter into that minefield, because no matter how one defines intelligence, it is quite clear that neither intelligence nor toolmaking by themselves can account for the Sapiens conquest of the world. According to most definitions of intelligence, a million years ago humans were already the most intelligent animals around, as well as the world’s champion toolmakers, yet they remained insignificant creatures with little impact on the surrounding ecosystem. They were obviously lacking some key feature other than intelligence and toolmaking.
Perhaps humankind eventually came to dominate the planet not because of some elusive third key ingredient, but due simply to the evolution of even higher intelligence and even better toolmaking abilities? It doesn’t seem so, because when we examine the historical record, we don’t see a direct correlation between the intelligence and toolmaking abilities of individual humans and the power of our species as a whole. Twenty thousand years ago, the average Sapiens probably had higher intelligence and better toolmaking skills than the average Sapiens of today. Modern schools and employers may test our aptitudes from time to time but, no matter how badly we do, the welfare state always guarantees our basic needs. In the Stone Age natural selection tested you every single moment of every single day, and if you flunked any of its numerous tests you were pushing up the daisies in no time. Yet despite the superior toolmaking abilities of our Stone Age ancestors, and despite their sharper minds and far more acute senses, 20,000 years ago humankind was much weaker than it is today.
Over those 20,000 years humankind moved from hunting mammoth with stone-tipped spears to exploring the solar system with spaceships not thanks to the evolution of more dexterous hands or bigger brains (our brains today seem actually to be smaller).17 Instead, the crucial factor in our conquest of the world was our ability to connect many humans to one another.18 Humans nowadays completely dominate the planet not because the individual human is far smarter and more nimble-fingered than the individual chimp or wolf, but because Homo sapiens is the only species on earth capable of co-operating flexibly in large numbers. Intelligence and toolmaking were obviously very important as well. But if humans had not learned to cooperate flexibly in large numbers, our crafty brains and deft hands would still be splitting flint stones rather than uranium atoms.
If cooperation is the key, how come the ants and bees did not beat us to the nuclear bomb even though they learned to cooperate en masse millions of years before us? Because their cooperation lacks flexibility. Bees cooperate in very sophisticated ways, but they cannot reinvent their social system overnight. If a hive faces a new threat or a new opportunity, the bees cannot, for example, guillotine the queen and establish a republic.
Social mammals such as elephants and chimpanzees cooperate far more flexibly than bees, but they do so only with small numbers of friends and family members. Their cooperation is based on personal acquaintance. If I am a chimpanzee and you are a chimpanzee and I want to cooperate with you, I must know you personally: what kind of chimp are you? Are you a nice chimp? Are you an evil chimp? How can I cooperate with you if I don’t know you? To the best of our knowledge, only Sapiens can cooperate in very flexible ways with countless numbers of strangers. This concrete capability – rather than an eternal soul or some unique kind of consciousness – explains our mastery of planet Earth.
History provides ample evidence for the crucial importance of large-scale cooperation. Victory almost invariably went to those who cooperated better – not only in struggles between Homo sapiens and other animals, but also in conflicts between different human groups. Thus Rome conquered Greece not because the Romans had larger brains or better toolmaking techniques, but because they were able to cooperate more effectively. Throughout history, disciplined armies easily routed disorganised hordes, and unified elites dominated the disorderly masses. In 1914, for example, 3 million Russian noblemen, officials and business people lorded it over 180 million peasants and workers. The Russian elite knew how to cooperate in defence of its common interests, whereas the 180 million commoners were incapable of effective mobilisation. Indeed, much of the elite’s efforts focused on ensuring that the 180 million people at the bottom would never learn to cooperate.
In order to mount a revolution, numbers are never enough. Revolutions are usually made by small networks of agitators rather than by the masses. If you want to launch a revolution, don’t ask yourself, ‘How many people support my ideas?’ Instead, ask yourself, ‘How many of my supporters are capable of effective collaboration?’ The Russian Revolution finally erupted not when 180 million peasants rose against the tsar, but rather when a handful of communists placed themselves at the right place at the right time. In 1917, at a time when the Russian upper and middle classes numbered at least 3 million people, the Communist Party had just 23,000 members.19 The communists nevertheless gained control of the vast Russian Empire because they organised themselves well. When authority in Russia slipped from the decrepit hands of the tsar and the equally shaky hands of Kerensky’s provisional government, the communists seized it with alacrity, gripping the reins of power like a bulldog locking its jaws on a bone.
The communists didn’t release their grip until the late 1980s. Effective organisation kept them in power for eight long decades, and they eventually fell due to defective organisation. On 21 December 1989 Nicolae Ceauşescu, the communist dictator of Romania, organised a mass demonstration of support in the centre of Bucharest. Over the previous months the Soviet Union had withdrawn its support from the eastern European communist regimes, the Berlin Wall had fallen, and revolutions had swept Poland, East Germany, Hungary, Bulgaria and Czechoslovakia. Ceauşescu, who had ruled Romania since 1965, believed he could withstand the tsunami, even though riots against his rule had erupted in the Romanian city of Timişoara on 17 December. As one of his counter-measures, Ceauşescu arranged a massive rally in Bucharest to prove to Romanians and the rest of the world that the majority of the populace still loved him – or at least feared him. The creaking party apparatus mobilised 80,000 people to fill the city’s central square, and citizens throughout Romania were instructed to stop all their activities and tune in on their radios and televisions.
To the cheering of the seemingly enthusiastic crowd, Ceauşescu mounted the balcony overlooking the square, as he had done scores of times in previous decades. Flanked by his wife, Elena, leading party officials and a bevy of bodyguards, Ceauşescu began delivering one of his trademark dreary speeches. For eight minutes he praised the glories of Romanian socialism, looking very pleased with himself as the crowd clapped mechanically. And then something went wrong. You can see it for yourself on YouTube. Just search for ‘Ceauşescu’s last speech’, and watch history in action.20
The YouTube clip shows Ceauşescu starting another long sentence, saying, ‘I want to thank the initiators and organisers of this great event in Bucharest, considering it as a—’, and then he falls silent, his eyes open wide, and he freezes in disbelief. He never finished the sentence. You can see in that split second how an entire world collapses. Somebody in the audience booed. People still argue today who was the first person who dared to boo. And then another person booed, and another, and another, and within a few seconds the masses began whistling, shouting abuse and calling out ‘Ti-mi-şoa-ra! Ti-mi-şoa-ra!’
18. The moment a world collapses: a stunned Ceauşescu cannot believe his eyes and ears.
18. Film still taken from www.youtube.com/watch?v=wWIbCtz_Xwk © TVR.
All this happened live on Romanian television, as three-quarters of the populace sat glued to the screens, their hearts throbbing wildly. The notorious secret police – the Securitate – immediately ordered the broadcast to be stopped, but the television crews disobeyed. The cameraman pointed the camera towards the sky so that viewers couldn’t see the panic among the party leaders on the balcony, but the soundman kept recording, and the technicians continued the transmission. The whole of Romania heard the crowd booing, while Ceauşescu yelled, ‘Hello! Hello! Hello!’ as if the problem was with the microphone. His wife Elena began scolding the audience, ‘Be quiet! Be quiet!’ until Ceauşescu turned and yelled at her – still live on television – ‘You be quiet!’ Ceauşescu then appealed to the excited crowds in the square, imploring them, ‘Comrades! Comrades! Be quiet, comrades!’
But the comrades were unwilling to be quiet. Communist Romania crumbled when 80,000 people in the Bucharest central square realised they were much stronger than the old man in the fur hat on the balcony. What is truly astounding, however, is not the moment the system collapsed, but the fact that it managed to survive for decades. Why are revolutions so rare? Why do the masses sometimes clap and cheer for centuries on end, doing everything the man on the balcony commands, even though they could in theory charge forward at any moment and tear him to pieces?
Ceauşescu and his cronies dominated 20 million Romanians for four decades because they ensured three vital conditions. First, they placed loyal communist apparatchiks in control of all networks of cooperation, such as the army, trade unions and even sports associations. Second, they prevented the creation of any rival organisations – whether political, economic or social – which might serve as a basis for anti-communist cooperation. Third, they relied on the support of sister communist parties in the Soviet Union and eastern Europe. Despite occasional tensions, these parties helped each other in times of need, or at least guaranteed that no outsider poked his nose into the socialist paradise. Under such conditions, despite all the hardship and suffering inflicted on them by the ruling elite, the 20 million Romanians were unable to organise any effective opposition.
Ceauşescu fell from power only once all three conditions no longer held. In the late 1980s the Soviet Union withdrew its protection and the communist regimes began falling like dominoes. By December 1989 Ceauşescu could not expect any outside assistance. Just the opposite – revolutions in nearby countries gave heart to the local opposition. The Communist Party itself began splitting into rival camps. The moderates wished to rid themselves of Ceauşescu and initiate reforms before it was too late. By organising the Bucharest demonstration and broadcasting it live on television, Ceauşescu himself provided the revolutionaries with the perfect opportunity to discover their power and rally against him. What quicker way to spread a revolution than by showing it on TV?
Yet when power slipped from the hands of the clumsy organiser on the balcony, it did not pass to the masses in the square. Though numerous and enthusiastic, the crowds did not know how to organise themselves. Hence just as in Russia in 1917, power passed to a small group of political players whose only asset was good organisation. The Romanian Revolution was hijacked by the self-proclaimed National Salvation Front, which was in fact a smokescreen for the moderate wing of the Communist Party. The Front had no real ties to the demonstrating crowds. It was manned by mid-ranking party officials, and led by Ion Iliescu, a former member of the Communist Party’s central committee and one-time head of the propaganda department. Iliescu and his comrades in the National Salvation Front reinvented themselves as democratic politicians, proclaimed to any available microphone that they were the leaders of the revolution, and then used their long experience and network of cronies to take control of the country and pocket its resources.
In communist Romania almost everything was owned by the state. Democratic Romania quickly privatised its assets, selling them at bargain prices to the ex-communists, who alone grasped what was happening and collaborated to feather each other’s nests. Government companies that controlled national infrastructure and natural resources were sold to former communist officials at end-of-season prices while the party’s foot soldiers bought houses and apartments for pennies.
Ion Iliescu was elected president of Romania, while his colleagues became ministers, parliament members, bank directors and multimillionaires. The new Romanian elite that controls the country to this day is composed mostly of former communists and their families. The masses who risked their necks in Timişoara and Bucharest settled for scraps, because they did not know how to cooperate and how to create an efficient organisation to look after their own interests.21
A similar fate befell the Egyptian Revolution of 2011. What television did in 1989, Facebook and Twitter did in 2011. The new media helped the masses coordinate their activities, so that thousands of people flooded the streets and squares at the right moment and toppled the Mubarak regime. However, it is one thing to bring 100,000 people to Tahrir Square, and quite another to get a grip on the political machinery, shake the right hands in the right back rooms and run a country effectively. Consequently, when Mubarak stepped down the demonstrators could not fill the vacuum. Egypt had only two institutions sufficiently organised to rule the country: the army and the Muslim Brotherhood. Hence the revolution was hijacked first by the Brotherhood, and eventually by the army.
The Romanian ex-communists and the Egyptian generals were not more intelligent or nimble-fingered than either the old dictators or the demonstrators in Bucharest and Cairo. Their advantage lay in flexible cooperation. They cooperated better than the crowds, and they were willing to show far more flexibility than the hidebound Ceauşescu and Mubarak.
If Sapiens rule the world because we alone can cooperate flexibly in large numbers, then this undermines our belief in the sacredness of human beings. We tend to think that we are special, and deserve all kinds of privileges. As proof, we point to the amazing achievements of our species: we built the pyramids and the Great Wall of China; we deciphered the structure of atoms and DNA molecules; we reached the South Pole and the moon. If these accomplishments resulted from some unique essence that each individual human has – an immortal soul, say – then it would make sense to sanctify human life. Yet since these triumphs actually result from mass cooperation, it is far less clear why they should make us revere individual humans.
A beehive has much greater power than an individual butterfly, yet that doesn’t imply a bee is therefore more hallowed than a butterfly. The Romanian Communist Party successfully dominated the disorganised Romanian population. Does it follow that the life of a party member was more sacred than the life of an ordinary citizen? Humans know how to cooperate far more effectively than chimpanzees, which is why humans launch spaceships to the moon whereas chimpanzees throw stones at zoo visitors. Does it mean that humans are superior beings?
Well, maybe. It depends on what enables humans to cooperate so well in the first place. Why are humans alone able to construct such large and sophisticated social systems? Social cooperation among most social mammals such as chimpanzees, wolves and dolphins relies on intimate acquaintance. Among common chimpanzees, individuals will go hunting together only after they have got to know each other well and established a social hierarchy. Hence chimpanzees spend a lot of time in social interactions and power struggles. When alien chimpanzees meet, they usually cannot cooperate, but instead scream at each other, fight or flee as quickly as possible.
Among pygmy chimpanzees – also known as bonobos – things are a bit different. Bonobos often use sex in order to dispel tensions and cement social bonds. Not surprisingly, homosexual intercourse is consequently very common among them. When two alien groups of bonobos encounter one another, at first they display fear and hostility, and the jungle is filled with howls and screams. Soon enough, however, females from one group cross no-chimp’s-land, and invite the strangers to make love instead of war. The invitation is usually accepted, and within a few minutes the potential battlefield teems with bonobos having sex in almost every conceivable posture, including hanging upside down from trees.
Sapiens know these cooperative tricks well. They sometimes form power hierarchies similar to those of common chimpanzees, whereas on other occasions they cement social bonds with sex just like bonobos. Yet personal acquaintance – whether it involves fighting or copulating – cannot form the basis for large-scale cooperation. You cannot settle the Greek debt crisis by inviting Greek politicians and German bankers to either a fist fight or an orgy. Research indicates that Sapiens just can’t have intimate relations (whether hostile or amorous) with more than 150 individuals.22 Whatever enables humans to organise mass-cooperation networks, it isn’t intimate relations.
This is bad news for psychologists, sociologists, economists and others who try to decipher human society through laboratory experiments. For both organisational and financial reasons, the vast majority of experiments are conducted either on individuals or on small groups of participants. Yet it is risky to extrapolate from small-group behaviour to the dynamics of mass societies. A nation of 100 million people functions in a fundamentally different way to a band of a hundred individuals.
Take, for example, the Ultimatum Game – one of the most famous experiments in behavioural economics. This experiment is usually conducted on two people. One of them gets $100, which he must divide between himself and the other participant in any way he wants. He may keep everything, split the money in half or give most of it away. The other player can do one of two things: accept the suggested division, or reject it outright. If he rejects the division, nobody gets anything.
Classical economic theories maintain that humans are rational calculating machines. They propose that most people will keep $99, and offer $1 to the other participant. They further propose that the other participant will accept the offer. A rational person offered a dollar will always say yes. What does he care if the other player gets $99?
Classical economists have probably never left their laboratories and lecture halls to venture into the real world. Most people playing the Ultimatum Game reject very low offers because they are ‘unfair’. They prefer losing a dollar to looking like suckers. Since this is how the real world functions, few people make very low offers in the first place. Most people divide the money equally, or give themselves only a moderate advantage, offering $30 or $40 to the other player.
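For readers who think in code, the contrast can be sketched in a few lines. The snippet below is purely illustrative and is not drawn from the experiments described here; the 30 per cent rejection threshold is an assumption chosen only to mimic the fairness-driven behaviour just described.

```python
# Illustrative sketch of the Ultimatum Game (hypothetical numbers, not experimental data).

POT = 100  # dollars to be divided between proposer and responder

def rational_responder(offer: int) -> bool:
    """Classical theory: accept any positive offer, since $1 beats $0."""
    return offer > 0

def fairness_responder(offer: int, threshold: float = 0.3) -> bool:
    """Observed behaviour: reject offers felt to be unfair, even at a personal cost.
    The 0.3 threshold is an assumption for illustration only."""
    return offer >= POT * threshold

for offer in (1, 20, 40, 50):
    print(f"Offer of ${offer}: "
          f"classical responder accepts={rational_responder(offer)}, "
          f"fairness-minded responder accepts={fairness_responder(offer)}")
```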
The Ultimatum Game made a significant contribution to undermining classical economic theories and to establishing the most important economic discovery of the last few decades: Sapiens don’t behave according to a cold mathematical logic, but rather according to a warm social logic. We are ruled by emotions. These emotions, as we saw earlier, are in fact sophisticated algorithms that reflect the social mechanisms of ancient hunter-gatherer bands. If 30,000 years ago I helped you hunt a wild chicken and you then kept almost all the chicken to yourself, offering me just one wing, I did not say to myself: ‘Better one wing than nothing at all.’ Instead my evolutionary algorithms kicked in, adrenaline and testosterone flooded my system, my blood boiled, and I stamped my feet and shouted at the top of my voice. In the short term I may have gone hungry, and even risked a punch or two. But it paid off in the long term, because you thought twice before ripping me off again. We refuse unfair offers because people who meekly accepted unfair offers didn’t survive in the Stone Age.
Observations of contemporary hunter-gatherer bands support this idea. Most bands are highly egalitarian, and when a hunter comes back to camp carrying a fat deer, everybody gets a share. The same is true of chimpanzees. When one chimp kills a piglet, the other troop members will gather round him with outstretched hands, and usually they all get a piece.
In another recent experiment, the primatologist Frans de Waal placed two capuchin monkeys in two adjacent cages, so that each could see everything the other was doing. De Waal and his colleagues placed small stones inside each cage, and trained the monkeys to give them these stones. Whenever a monkey handed over a stone, he received food in exchange. At first the reward was a piece of cucumber. Both monkeys were very pleased with that, and happily ate their cucumber. After a few rounds de Waal moved to the next stage of the experiment. This time, when the first monkey surrendered a stone, he got a grape. Grapes are much more tasty than cucumbers. However, when the second monkey gave a stone, he still received a piece of cucumber. The second monkey, who was previously very happy with his cucumber, became incensed. He took the cucumber, looked at it in disbelief for a moment, and then threw it at the scientists in anger and began jumping and screeching loudly. He ain’t a sucker.23
This hilarious experiment (which you can see for yourself on YouTube), along with the Ultimatum Game, has led many to believe that primates have a natural morality, and that equality is a universal and timeless value. People are egalitarian by nature, and unequal societies can never function well due to resentment and dissatisfaction.
But is that really so? These theories may work well on chimpanzees, capuchin monkeys and small hunter-gatherer bands. They also work well in the lab, where you test them on small groups of people. Yet once you observe the behaviour of human masses you discover a completely different reality. Most human kingdoms and empires were extremely unequal, yet many of them were surprisingly stable and efficient. In ancient Egypt, the pharaoh sprawled on comfortable cushions inside a cool and sumptuous palace, wearing golden sandals and gem-studded tunics, while beautiful maids popped sweet grapes into his mouth. Through the open window he could see the peasants in the fields, toiling in dirty rags under a merciless sun, and blessed was the peasant who had a cucumber to eat at the end of the day. Yet the peasants rarely revolted.
In 1740 King Frederick II of Prussia invaded Silesia, thus commencing a series of bloody wars that earned him his sobriquet Frederick the Great, turned Prussia into a major power and left hundreds of thousands of people dead, crippled or destitute. Most of Frederick’s soldiers were hapless recruits, subject to iron discipline and draconian drill. Not surprisingly, the soldiers had little love for their supreme commander. As Frederick watched his troops assemble for the invasion, he told one of his generals that what struck him most about the scene was that ‘we are standing here in perfect safety, looking at 60,000 men – they are all our enemies, and there is not one of them who is not better armed and stronger than we are, and yet they all tremble in our presence, while we have no reason whatsoever to be afraid of them’.24 Frederick could indeed watch them in perfect safety. During the following years, despite all the hardships of war, these 60,000 armed men never revolted against him – indeed, many of them served him with exceptional courage, risking and even sacrificing their very lives.
Why did the Egyptian peasants and Prussian soldiers act so differently than we would have expected on the basis of the Ultimatum Game and the capuchin monkeys experiment? Because large numbers of people behave in a fundamentally different way than do small numbers. What would scientists see if they conducted the Ultimatum Game experiment on two groups of 1 million people each, who had to share $100 billion?
They would probably witness strange and fascinating dynamics. For example, since 1 million people cannot make decisions collectively, each group might sprout a small ruling elite. What if one elite offers the other $10 billion, keeping $90 billion? The leaders of the second group might well accept this unfair offer and siphon most of the $10 billion into their Swiss bank accounts, while preventing rebellion among their followers with a combination of sticks and carrots. The leadership might threaten dissidents with immediate and severe punishment, while promising the meek and patient everlasting rewards in the afterlife. This is what happened in ancient Egypt and eighteenth-century Prussia, and this is how things still work out in numerous countries around the world.
Such threats and promises often succeed in creating stable human hierarchies and mass-cooperation networks, as long as people believe that they reflect the inevitable laws of nature or the divine commands of God, rather than just human whims. All large-scale human cooperation is ultimately based on our belief in imagined orders. These are sets of rules that, despite existing only in our imagination, we believe to be as real and inviolable as gravity. ‘If you sacrifice ten bulls to the sky god, the rain will come; if you honour your parents, you will go to heaven; and if you don’t believe what I am telling you – you’ll go to hell.’ As long as all Sapiens living in a particular locality believe in the same stories, they all follow the same rules, making it easy to predict the behaviour of strangers and to organise mass-cooperation networks. Sapiens often use visual marks such as a turban, a beard or a business suit to signal ‘you can trust me, I believe in the same story as you’. Our chimpanzee cousins cannot invent and spread such stories, which is why they cannot cooperate in large numbers.
People find it difficult to understand the idea of ‘imagined orders’ because they assume that there are only two types of realities: objective realities and subjective realities. In objective reality, things exist independently of our beliefs and feelings. Gravity, for example, is an objective reality. It existed long before Newton, and it affects people who don’t believe in it just as much as it affects those who do.
Subjective reality, in contrast, depends on my personal beliefs and feelings. Thus, suppose I feel a sharp pain in my head and go to the doctor. The doctor checks me thoroughly, but finds nothing wrong. So she sends me for a blood test, urine test, DNA test, X-ray, electrocardiogram, fMRI scan and a plethora of other procedures. When the results come in she announces that I am perfectly healthy, and I can go home. Yet I still feel a sharp pain in my head. Even though every objective test has found nothing wrong with me, and even though nobody except me feels the pain, for me the pain is 100 per cent real.
Most people presume that reality is either objective or subjective, and that there is no third option. Hence once they satisfy themselves that something isn’t just their own subjective feeling, they jump to the conclusion it must be objective. If lots of people believe in God; if money makes the world go round; and if nationalism starts wars and builds empires – then these things aren’t just a subjective belief of mine. God, money and nations must therefore be objective realities.
However, there is a third level of reality: the intersubjective level. Intersubjective entities depend on communication among many humans rather than on the beliefs and feelings of individual humans. Many of the most important agents in history are intersubjective. Money, for example, has no objective value. You cannot eat, drink or wear a dollar bill. Yet as long as billions of people believe in its value, you can use it to buy food, beverages and clothing. If the baker suddenly loses his faith in the dollar bill and refuses to give me a loaf of bread for this green piece of paper, it doesn’t matter much. I can just go down a few blocks to the nearby supermarket. However, if the supermarket cashiers also refuse to accept this piece of paper, along with the hawkers in the market and the salespeople in the mall, then the dollar will lose its value. The green pieces of paper will go on existing, of course, but they will be worthless.
Such things actually happen from time to time. On 3 November 1985 the Myanmar government unexpectedly announced that banknotes of twenty-five, fifty and a hundred kyats were no longer legal tender. People were given no opportunity to exchange the notes, and savings of a lifetime were instantaneously turned into heaps of worthless paper. To replace the defunct notes, the government introduced new seventy-five-kyat bills, allegedly in honour of the seventy-fifth birthday of Myanmar’s dictator, General Ne Win. In August 1986, banknotes of fifteen kyats and thirty-five kyats were issued. Rumour had it that the dictator, who had a strong faith in numerology, believed that fifteen and thirty-five are lucky numbers. They brought little luck to his subjects. On 5 September 1987 the government suddenly decreed that all thirty-five and seventy-five kyat notes were no longer money.
The value of money is not the only thing that might evaporate once people stop believing in it. The same can happen to laws, gods and even entire empires. One moment they are busy shaping the world, and the next moment they no longer exist. Zeus and Hera were once important powers in the Mediterranean basin, but today they lack any authority because nobody believes in them. The Soviet Union could once destroy the entire human race, yet it ceased to exist at the stroke of a pen. At 2 p.m. on 8 December 1991, in a state dacha near Viskuli, the leaders of Russia, Ukraine and Belarus signed the Belavezha Accords, which stated that ‘We, the Republic of Belarus, the Russian Federation and Ukraine, as founding states of the USSR that signed the union treaty of 1922, hereby establish that the USSR as a subject of international law and a geopolitical reality ceases its existence.’25 And that was that. No more Soviet Union.
It is relatively easy to accept that money is an intersubjective reality. Most people are also happy to acknowledge that ancient Greek gods, evil empires and the values of alien cultures exist only in the imagination. Yet we don’t want to accept that our God, our nation or our values are mere fictions, because these are the things that give meaning to our lives. We want to believe that our lives have some objective meaning, and that our sacrifices matter to something beyond the stories in our head. Yet in truth the lives of most people have meaning only within the network of stories they tell one another.
Meaning is created when many people weave together a common network of stories. Why does a particular action – such as getting married in church, fasting on Ramadan or voting on election day – seem meaningful to me? Because my parents also think it is meaningful, as do my brothers, my neighbours, people in nearby cities and even the residents of far-off countries. And why do all these people think it is meaningful? Because their friends and neighbours also share the same view. People constantly reinforce each other’s beliefs in a self-perpetuating loop. Each round of mutual confirmation tightens the web of meaning further, until you have little choice but to believe what everyone else believes.
19. Signing the Belavezha Accords. Pen touches paper – and abracadabra! The Soviet Union disappears.
19. © NOVOSTI/AFP/Getty Images.
Yet over decades and centuries the web of meaning unravels and a new web is spun in its place. To study history means to watch the spinning and unravelling of these webs, and to realise that what seems to people in one age the most important thing in life becomes utterly meaningless to their descendants.
In 1187 Saladin defeated the crusader army at the Battle of Hattin and conquered Jerusalem. In response the Pope launched the Third Crusade to recapture the holy city. Imagine a young English nobleman named John, who left home to fight Saladin. John believed that his actions had an objective meaning. He believed that if he died on the crusade, after death his soul would ascend to heaven, where it would enjoy everlasting celestial joy. He would have been horrified to learn that the soul and heaven are just stories invented by humans. John wholeheartedly believed that if he reached the Holy Land, and if some Muslim warrior with a big moustache brought an axe down on his head, he would feel an unbearable pain, his ears would ring, his legs would crumble under him, his field of vision would turn black – and the very next moment he would see brilliant light all around him, he would hear angelic voices and melodious harps, and radiant winged cherubs would beckon him through a magnificent golden gate.
John had a very strong faith in all this, because he was enmeshed within an extremely dense and powerful web of meaning. His earliest memories were of Grandpa Henry’s rusty sword, hanging in the castle’s main hall. Ever since he was a toddler John had heard stories of Grandpa Henry who died on the Second Crusade and who is now resting with the angels in heaven, watching over John and his family. When minstrels visited the castle, they usually sang about the brave crusaders who fought in the Holy Land. When John went to church, he enjoyed looking at the stained-glass windows. One showed Godfrey of Bouillon riding a horse and impaling a wicked-looking Muslim on his lance. Another showed the souls of sinners burning in hell. John listened attentively to the local priest, the most learned man he knew. Almost every Sunday, the priest explained – with the help of well-crafted parables and hilarious jokes – that there was no salvation outside the Catholic Church, that the Pope in Rome was our holy father and that we always had to obey his commands. If we murdered or stole, God would send us to hell; but if we killed infidel Muslims, God would welcome us to heaven.
One day when John was just turning eighteen a dishevelled knight rode to the castle’s gate, and in a choked voice announced the news: Saladin has destroyed the crusader army at Hattin! Jerusalem has fallen! The Pope has declared a new crusade, promising eternal salvation to anyone who dies on it! All around, people looked shocked and worried, but John’s face lit up in an otherworldly glow and he proclaimed: ‘I am going to fight the infidels and liberate the Holy Land!’ Everyone fell silent for a moment, and then smiles and tears appeared on their faces. His mother wiped her eyes, gave John a big hug and told him how proud she was of him. His father gave him a mighty pat on the back, and said: ‘If only I was your age, son, I would join you. Our family’s honour is at stake – I am sure you won’t disappoint us!’ Two of his friends announced that they were coming too. Even John’s sworn rival, the baron on the other side of the river, paid a visit to wish him Godspeed.
As he left the castle, villagers came forth from their hovels to wave to him, and all the pretty girls looked longingly at the brave crusader setting off to fight the infidels. When he set sail from England and made his way through strange and distant lands – Normandy, Provence, Sicily – he was joined by bands of foreign knights, all with the same destination and the same faith. When the army finally disembarked in the Holy Land and waged battle with Saladin’s hosts, John was amazed to discover that even the wicked Saracens shared his beliefs. True, they were a bit confused, thinking that the Christians were the infidels and that the Muslims were obeying God’s will. Yet they too accepted the basic principle that those fighting for God and Jerusalem will go straight to heaven when they die.
In such a way, thread by thread, medieval civilisation spun its web of meaning, trapping John and his contemporaries like flies. It was inconceivable to John that all these stories were just figments of the imagination. Maybe his parents and uncles were wrong. But the minstrels too, and all his friends, and the village girls, the learned priest, the baron on the other side of the river, the Pope in Rome, the Provençal and Sicilian knights, and even the very Muslims – is it possible that they were all hallucinating?
And the years pass. As the historian watches, the web of meaning unravels and another is spun in its stead. John’s parents die, followed by all his siblings and friends. Instead of minstrels singing about the crusades, the new fashion is stage plays about tragic love affairs. The family castle burns to the ground and, when it is rebuilt, no trace is found of Grandpa Henry’s sword. The church windows shatter in a winter storm and the replacement glass no longer depicts Godfrey of Bouillon and the sinners in hell, but rather the great triumph of the king of England over the king of France. The local priest has stopped calling the Pope ‘our holy father’ – he is now referred to as ‘that devil in Rome’. In the nearby university scholars pore over ancient Greek manuscripts, dissect dead bodies and whisper quietly behind closed doors that perhaps there is no such thing as the soul.
And the years continue to pass. Where the castle once stood, there is now a shopping mall. In the local cinema they are screening Monty Python and the Holy Grail for the umpteenth time. In an empty church a bored vicar is overjoyed to see two Japanese tourists. He explains at length about the stained-glass windows, while they politely smile, nodding in complete incomprehension. On the steps outside a gaggle of teenagers are playing with their iPhones. They watch a new YouTube remix of John Lennon’s ‘Imagine’. ‘Imagine there’s no heaven,’ sings Lennon, ‘it’s easy if you try.’ A Pakistani street cleaner is sweeping the pavement, while a nearby radio broadcasts the news: the carnage in Syria continues, and the Security Council’s meeting has ended in an impasse. Suddenly a hole in time opens, a mysterious ray of light illuminates the face of one of the teenagers, who announces: ‘I am going to fight the infidels and liberate the Holy Land!’
Infidels and Holy Land? These words no longer carry any meaning for most people in today’s England. Even the vicar would probably think the teenager is having some sort of psychotic episode. In contrast, if an English youth decided to join Amnesty International and travel to Syria to protect the human rights of refugees, he would be seen as a hero. In the Middle Ages people would have thought he had gone bonkers. Nobody in twelfth-century England knew what human rights were. You want to travel to the Middle East and risk your life not in order to kill Muslims, but to protect one group of Muslims from another? You must be out of your mind.
That’s how history unfolds. People weave a web of meaning, believe in it with all their heart, but sooner or later the web unravels, and when we look back we cannot understand how anybody could have taken it seriously. With hindsight, going on crusade in the hope of reaching Paradise sounds like utter madness. With hindsight, the Cold War seems even madder. How come thirty years ago people were willing to risk nuclear holocaust because of their belief in a communist paradise? A hundred years hence, our belief in democracy and human rights might look equally incomprehensible to our descendants.
Sapiens rule the world because only they can weave an intersubjective web of meaning: a web of laws, forces, entities and places that exist purely in their common imagination. This web allows humans alone to organise crusades, socialist revolutions and human rights movements.
Other animals may also imagine various things. A cat waiting to ambush a mouse might not see the mouse, but may well imagine the shape and even taste of the mouse. Yet to the best of our knowledge, cats are able to imagine only things that actually exist in the world, like mice. They cannot imagine things that they have never seen or smelled or tasted – such as the US dollar, the Google corporation or the European Union. Only Sapiens can imagine such chimeras.
Consequently, whereas cats and other animals are confined to the objective realm and use their communication systems merely to describe reality, Sapiens use language to create completely new realities. During the last 70,000 years the intersubjective realities that Sapiens invented became ever more powerful, so that today they dominate the world. Will the chimpanzees, the elephants, the Amazon rainforests and the Arctic glaciers survive the twenty-first century? This depends on the wishes and decisions of intersubjective entities such as the European Union and the World Bank; entities that exist only in our shared imagination.
No other animal can stand up to us, not because they lack a soul or a mind, but because they lack the necessary imagination. Lions can run, jump, claw and bite. Yet they cannot open a bank account or file a lawsuit. And in the twenty-first century, a banker who knows how to file a lawsuit is far more powerful than the most ferocious lion in the savannah.
As well as separating humans from other animals, this ability to create intersubjective entities also separates the humanities from the life sciences. Historians seek to understand the development of intersubjective entities like gods and nations, whereas biologists hardly recognise the existence of such things. Some believe that if we could only crack the genetic code and map every neuron in the brain, we would know all of humanity’s secrets. After all, if humans have no soul, and if thoughts, emotions and sensations are just biochemical algorithms, why can’t biology account for all the vagaries of human societies? From this perspective, the crusades were territorial disputes shaped by evolutionary pressures, and English knights going to fight Saladin in the Holy Land were not that different from wolves trying to appropriate the territory of a neighbouring pack.
The humanities, in contrast, emphasise the crucial importance of intersubjective entities, which cannot be reduced to hormones and neurons. To think historically means to ascribe real power to the contents of our imaginary stories. Of course, historians don’t ignore objective factors such as climate changes and genetic mutations, but they give much greater importance to the stories people invent and believe. North Korea and South Korea are so different from one another not because people in Pyongyang have different genes to people in Seoul, or because the north is colder and more mountainous. It’s because the north is dominated by very different fictions.
Maybe someday breakthroughs in neurobiology will enable us to explain communism and the crusades in strictly biochemical terms. Yet we are very far from that point. During the twenty-first century the border between history and biology is likely to blur not because we will discover biological explanations for historical events, but rather because ideological fictions will rewrite DNA strands; political and economic interests will redesign the climate; and the geography of mountains and rivers will give way to cyberspace. As human fictions are translated into genetic and electronic codes, the intersubjective reality will swallow up the objective reality and biology will merge with history. In the twenty-first century fiction might thereby become the most potent force on earth, surpassing even wayward asteroids and natural selection. Hence if we want to understand our future, cracking genomes and crunching numbers is hardly enough. We must also decipher the fictions that give meaning to the world.
20. The Creator: Jackson Pollock in a moment of inspiration.
20. Rudy Burckhardt, photographer. Jackson Pollock and Lee Krasner papers, c.1905–1984. Archives of American Art, Smithsonian Institution. © The Pollock–Krasner Foundation ARS, NY and DACS, London, 2016.
What kind of world did humans create?
How did humans become convinced that they not only control the world, but also give it meaning?
How did humanism – the worship of humankind – become the most important religion of all?
Animals such as wolves and chimpanzees live in a dual reality. On the one hand they are familiar with objective entities outside them, such as trees, rocks and rivers. On the other hand they are aware of subjective experiences within them, such as fear, joy and desire. Sapiens, in contrast, live in a triple-layered reality. In addition to trees, rivers, fears and desires, the Sapiens world also contains stories about money, gods, nations and corporations. As history unfolded, the impact of gods, nations and corporations grew at the expense of rivers, fears and desires. There are still many rivers in the world, and people are still motivated by their fears and wishes, but Jesus Christ, the French Republic and Apple Inc. have dammed and harnessed the rivers, and have learned to shape our deepest anxieties and yearnings.
Since new twenty-first-century technologies are likely to make such fictions even more powerful, to understand our future we need to understand how stories about Christ, France and Apple have gained so much power. Humans think they make history, but history actually revolves around the web of stories. The basic abilities of individual humans have not changed much since the Stone Age. But the web of stories has grown from strength to strength, thereby pushing history from the Stone Age to the Silicon Age.
It all began about 70,000 years ago, when the Cognitive Revolution enabled Sapiens to start talking about things that existed only in their own imagination. For the ensuing 60,000 years Sapiens wove many fictional webs, but these remained small and local. The spirit of a revered ancestor worshipped by one tribe was completely unknown to its neighbours, and seashells valuable in one locality became worthless once you crossed the nearby mountain range. Stories about ancestral spirits and precious seashells still gave Sapiens a huge advantage, because they allowed hundreds and sometimes even thousands of Sapiens to cooperate effectively, which was far more than Neanderthals or chimpanzees could do. Yet as long as Sapiens remained hunter-gatherers they could not cooperate on a truly massive scale, because it was impossible to feed a city or a kingdom by hunting and gathering. Consequently the spirits, fairies and demons of the Stone Age were relatively weak entities.
The Agricultural Revolution, which began about 12,000 years ago, provided the necessary material base for enlarging and strengthening the intersubjective networks. Farming made it possible to feed thousands of people in crowded cities and thousands of soldiers in disciplined armies. However, the intersubjective webs then encountered a new obstacle. In order to preserve the collective myths and organise mass cooperation, the early farmers relied on the data-processing abilities of the human brain, which were strictly limited.
Farmers believed in stories about great gods. They built temples to their favourite god, held festivals in his honour, offered him sacrifices, and gave him lands, tithes and presents. In the first cities of ancient Sumer, about 6,000 years ago, the temples were not just centres of worship, but also the most important political and economic hubs. The Sumerian gods fulfilled a function analogous to modern brands and corporations. Today, corporations are fictional legal entities that own property, lend money, hire employees and initiate economic enterprises. In the ancient cities of Uruk, Lagash and Shurupak the gods functioned as legal entities that could own fields and slaves, give and receive loans, pay salaries and build dams and canals.
Since the gods never died, and since they had no children to quarrel over their inheritance, they gathered more and more property and power. An increasing number of Sumerians found themselves employed by the gods, taking loans from the gods, tilling the gods’ lands and owing them taxes and tithes. Just as in present-day San Francisco John is employed by Google while Mary works for Microsoft, so in ancient Uruk one person was employed by the great god Enki while his neighbour worked for the goddess Inanna. The temples of Enki and Inanna dominated the Uruk skyline, and their divine logos branded buildings, products and clothes. For the Sumerians, Enki and Inanna were as real as Google and Microsoft are real for us. Compared to their predecessors – the ghosts and spirits of the Stone Age – the Sumerian gods were very powerful entities.
It goes without saying that the gods didn’t actually run their businesses, for the simple reason that they didn’t exist anywhere except in the human imagination. Day-to-day activities were managed by the temple priests (just as Google and Microsoft need to hire flesh-and-blood humans to manage their affairs). However, as the gods acquired more and more property and power, the priests could not cope. They may have represented the mighty sky god or the all-knowing earth goddess, but they themselves were fallible mortals. They had difficulty remembering which estates, orchards and fields belonged to the goddess Inanna, which of Inanna’s employees had already received their salaries, which of the goddess’s tenants had failed to pay their rents and what interest rate the goddess charged her debtors. This was one of the main reasons why in Sumer, like everywhere else around the world, human cooperation networks could not notably expand even thousands of years after the Agricultural Revolution. There were no huge kingdoms, no extensive trade networks and no universal religions.
This obstacle was finally removed about 5,000 years ago, when the Sumerians invented both writing and money. These Siamese twins – born to the same parents at the same time and in the same place – broke the data-processing limitations of the human brain. Writing and money made it possible to start collecting taxes from hundreds of thousands of people, to organise complex bureaucracies and to establish vast kingdoms. In Sumer these kingdoms were managed in the name of the gods by human priest-kings. In the neighbouring Nile Valley people went a step further, merging the priest-king with the god to create a living deity – pharaoh.
The Egyptians considered pharaoh to be an actual god rather than just a divine deputy. The whole of Egypt belonged to that god, and all people had to obey his orders and pay the taxes he levied. Just as in the Sumerian temples, so also in pharaonic Egypt the god didn’t manage his business empire by himself. Some pharaohs ruled with an iron fist, while others passed their days at banquets and festivities, but in both cases the practical work of administering Egypt was left to thousands of literate officials. Just like any other human, pharaoh had a biological body with biological needs, desires and emotions. But the biological pharaoh was of little importance. The real ruler of the Nile Valley was an imagined pharaoh who existed in the stories that millions of Egyptians told one another.
While pharaoh sat in the capital city of Memphis, eating grapes in his palace and dallying with his wives and mistresses, pharaoh’s officials criss-crossed the kingdom from the Mediterranean shore to the Nubian Desert. The bureaucrats calculated the taxes each village had to pay, recorded them on long papyrus scrolls and sent them to Memphis. When a written order came from Memphis to recruit soldiers to the army or labourers for some construction project, officials gathered the necessary men. They computed how much wheat the royal granaries contained, how many work days were required to clean the canals and reservoirs, and how many ducks and pigs to send to Memphis so that pharaoh’s harem could dine well. Even when the living deity died, and his body was embalmed and borne in an extravagant funerary procession to the royal necropolis outside Memphis, the bureaucracy kept going. Officials kept writing scrolls, collecting taxes, sending orders and oiling the gears of the pharaonic machine.
If the Sumerian gods remind us of present-day company brands, so the living-god pharaoh can be compared to modern personal brands such as Elvis Presley, Madonna or Justin Bieber. Just like pharaoh, Elvis too had a biological body, complete with biological needs, desires and emotions. Elvis ate and drank and slept. Yet Elvis was much more than a biological body. Like pharaoh, Elvis was a story, a myth, a brand – and the brand was far more important than the biological body. During Elvis’s lifetime, the brand earned millions of dollars selling records, tickets, posters and rights, but only a small fraction of the necessary work was performed by Elvis in person. Instead, most of it was accomplished by a small army of agents, lawyers, producers and secretaries. Consequently when the biological Elvis died, for the brand it was business as usual. Even today fans still buy the King’s posters and albums, radio stations go on paying royalties, and more than half a million pilgrims flock each year to Graceland, the King’s necropolis in Memphis, Tennessee.
21. Brands are not a modern invention. Just like Elvis Presley, pharaoh too was a brand rather than a living organism. For millions of followers his image counted for far more than his fleshy reality, and they kept worshipping him long after he was dead.
21. Left: © Richard Nowitz/Getty Images. Right: © Archive Photos/Stringer/Getty Images.
Prior to the invention of writing, stories were confined by the limited capacity of human brains. You couldn’t invent overly complex stories which people couldn’t remember. But with writing you could suddenly create extremely long and intricate stories, which were stored on tablets and papyri rather than in human heads. No ancient Egyptian remembered all of pharaoh’s lands, taxes and tithes; Elvis Presley never even read all the contracts signed in his name; no living soul is familiar with all the laws and regulations of the European Union; and no banker or CIA agent tracks down every single dollar in the world. Yet all of these minutiae are written somewhere, and the assemblage of relevant documents defines the identity and power of pharaoh, Elvis, the EU and the dollar.
Writing has thus enabled humans to organise entire societies in an algorithmic fashion. We encountered the term ‘algorithm’ when we tried to understand what emotions are and how brains function, and defined it as a methodical set of steps that can be used to make calculations, resolve problems and reach decisions. In illiterate societies people make all calculations and decisions in their heads. In literate societies people are organised into networks, so that each person is only a small step in a huge algorithm, and it is the algorithm as a whole that makes the important decisions. This is the essence of bureaucracy.
Think about a modern hospital, for example. When you arrive the receptionist hands you a standard form and asks you a predetermined set of questions. Your answers are forwarded to a nurse, who compares them with the hospital’s regulations in order to decide what preliminary tests to give you. She then measures, say, your blood pressure and heart rate, and takes a blood sample. The doctor on duty examines the initial results, and follows a strict protocol in determining to which ward to admit you. In the ward you are subjected to much more thorough examinations, such as an X-ray or an fMRI scan, mandated by thick medical guidebooks. Specialists then analyse the results according to well-known statistical databases, deciding what medicines to give you or what further tests to run.
This algorithmic structure ensures that it doesn’t really matter who is the receptionist, nurse or doctor on duty. Their personality type, their political opinions and their momentary moods are irrelevant. As long as they all follow the regulations and protocols, they stand a good chance of curing you. According to the algorithmic ideal, your fate is in the hands of ‘the system’, and not in the hands of the flesh-and-blood mortals who happen to occupy this or that post.
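To make the idea concrete, here is a minimal sketch in Python of the admission ‘algorithm’ just described. Every name, threshold and test in it is hypothetical and chosen purely for illustration; the point is only that each worker executes one small, fixed step, and the decision emerges from the chain of steps as a whole rather than from any individual’s judgement.

```python
# A purely illustrative sketch of a bureaucratic "algorithm".
# All thresholds, ward names and readings are hypothetical.

def receptionist(patient):
    """Step 1: collect the standard form."""
    return {"name": patient["name"], "symptoms": patient["symptoms"]}

def nurse(form):
    """Step 2: run the preliminary tests mandated by regulations."""
    return {**form, "blood_pressure": 150, "heart_rate": 95}  # hypothetical readings

def doctor(tests):
    """Step 3: follow a fixed protocol to choose a ward."""
    if tests["blood_pressure"] > 140 or tests["heart_rate"] > 100:
        return "cardiology"
    return "general"

def hospital(patient):
    """The algorithm as a whole makes the decision, whoever happens to fill each post."""
    return doctor(nurse(receptionist(patient)))

print(hospital({"name": "A. Patient", "symptoms": ["chest pain"]}))  # -> 'cardiology'
```

Swap out the receptionist, the nurse or the doctor and the outcome stays the same, which is precisely the algorithmic ideal: the decision belongs to the procedure, not to the people who carry it out.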
What’s true of hospitals is also true of armies, prisons, schools, corporations – and ancient kingdoms. Of course ancient Egypt was far less technologically sophisticated than a modern hospital, but the algorithmic principle was the same. In ancient Egypt too, most decisions were made not by a single wise person, but by a network of officials linked together through papyri and stone inscriptions. Acting in the name of the living-god pharaoh, the network restructured human society and reshaped the natural world. For example, pharaohs Senusret III and his son Amenemhat III, who ruled Egypt from 1878 BC to 1814 BC, dug a huge canal linking the Nile to the swamps of the Fayum Valley. An intricate system of dams, reservoirs and subsidiary canals diverted some of the Nile waters to Fayum, creating an immense artificial lake holding 13 trillion gallons of water.1 By comparison, Lake Mead, the largest man-made reservoir in the United States (formed by the Hoover Dam), holds at most 9 trillion gallons of water.
The Fayum engineering project gave pharaoh the power to regulate the Nile, prevent destructive floods and provide precious water in times of drought. In addition, it turned the Fayum Valley from a crocodile-infested swamp surrounded by barren desert into Egypt’s granary. On the shore of the new artificial lake the Egyptians built the city of Shedet, which the Greeks called Crocodilopolis – the city of crocodiles. It was dominated by the temple of the crocodile god Sobek, who was identified with pharaoh (contemporary statues sometimes show pharaoh sporting a crocodile head). The temple housed a sacred crocodile called Petsuchos, who was considered to be the living incarnation of Sobek. Just like the living-god pharaoh, the living-god Petsuchos was lovingly groomed by attending priests who provided the lucky reptile with lavish food and even toys, and dressed him up in gold cloaks and gem-encrusted crowns. After all, Petsuchos was the priests’ brand, and their authority and livelihood depended on him. When Petsuchos died, a new crocodile was immediately elected to fill his sandals, while the dead reptile was carefully embalmed and mummified.
In the days of Senusret III and Amenemhat III the Egyptians had neither bulldozers nor dynamite. They didn’t even have iron tools, work horses or wheels (the wheel did not enter common usage in Egypt until about 1500 BC). Bronze tools were considered cutting-edge technology, but they were so expensive and rare that most of the building work was performed with tools made only of stone and wood, operated by human muscle power. Many people argue that the great building projects of ancient Egypt – all the dams and reservoirs and pyramids – must have been built by aliens from outer space. How else could a culture lacking even wheels and iron accomplish such wonders?
The truth is very different. Egyptians built Lake Fayum and the pyramids thanks not to extraterrestrial help, but to superb organisational skills. Relying on thousands of literate bureaucrats, pharaoh recruited tens of thousands of labourers and enough food to maintain them for years on end. When tens of thousands of labourers cooperate for several decades, they can build an artificial lake or a pyramid even with stone tools.
Pharaoh himself hardly lifted a finger, of course. He didn’t collect taxes himself, he didn’t draw any architectural plans, and he certainly never picked up a shovel. But the Egyptians believed that only prayers to the living-god pharaoh and to his heavenly patron Sobek could save the Nile Valley from devastating floods and droughts. They were right. Pharaoh and Sobek were imaginary entities who did nothing to raise or lower the Nile water level, but when millions of people believed in pharaoh and Sobek and therefore cooperated in building dams and digging canals, floods and droughts became rare. Compared to the Sumerian gods, not to mention the Stone Age spirits, the gods of ancient Egypt were truly powerful entities that founded cities, raised armies and controlled the lives of millions of humans, cows and crocodiles.
It may sound strange to credit imaginary entities with building or controlling things. But nowadays we habitually say that the United States built the first nuclear bomb, that China built the Three Gorges Dam or that Google is building an autonomous car. Why not say, then, that pharaoh built a reservoir and Sobek dug a canal?
Writing thus facilitated the appearance of powerful fictional entities that organised millions of people and reshaped the reality of rivers, swamps and crocodiles. Simultaneously, writing also made it easier for humans to believe in the existence of such fictional entities, because it habituated people to experiencing reality through the mediation of abstract symbols.
Hunter-gatherers spent their days climbing trees, looking for mushrooms, and chasing boars and rabbits. Their daily reality consisted of trees, mushrooms, boars and rabbits. Peasants worked all day in the fields, ploughing, harvesting, grinding corn and taking care of farmyard animals. Their daily reality was the feeling of muddy earth under bare feet, the smell of oxen pulling the plough and the taste of warm bread fresh from the oven. In contrast, scribes in ancient Egypt devoted most of their time to reading, writing and calculating. Their daily reality consisted of ink marks on papyrus scrolls, which determined who owned which field, how much an ox cost and what yearly taxes the peasants had to pay. A scribe could decide the fate of an entire village with a stroke of his stylus.
The vast majority of people remained illiterate until the modern age, but the all-important administrators increasingly saw reality through the medium of written texts. For this literate elite – whether in ancient Egypt or in twentieth-century Europe – anything written on a piece of paper was at least as real as trees, oxen and human beings.
In the spring of 1940, when the Nazis overran France from the north, much of its Jewish population tried to escape the country towards the south. In order to cross the border, they needed visas to Spain and Portugal, and together with a flood of other refugees, tens of thousands of Jews besieged the Portuguese consulate in Bordeaux in a desperate attempt to get that life-saving piece of paper. The Portuguese government forbade its consuls in France to issue visas without prior approval from the Foreign Ministry, but the consul in Bordeaux, Aristides de Sousa Mendes, decided to disregard the order, throwing a thirty-year diplomatic career to the wind. As Nazi tanks were closing in on Bordeaux, Sousa Mendes and his team worked around the clock for ten days and nights, barely stopping to sleep, just issuing visas and stamping pieces of paper. Sousa Mendes issued thousands of visas before collapsing from exhaustion.
22. Aristides de Sousa Mendes, the angel with the rubber stamp.
22. Courtesy of the Sousa Mendes Foundation.
The Portuguese government – which had little desire to accept any of these refugees – sent agents to escort the disobedient consul back home, and fired him from the foreign office. Yet officials who cared little for the plight of human beings nevertheless had a deep reverence for documents, and the visas Sousa Mendes issued against orders were respected by French, Spanish and Portuguese bureaucrats alike, spiriting up to 30,000 people out of the Nazi death trap. Sousa Mendes, armed with little more than a rubber stamp, was responsible for the largest rescue operation by a single individual during the Holocaust.2
The sanctity of written records often had far less positive effects. From 1958 to 1961 communist China undertook the Great Leap Forward, as Mao Zedong sought to turn China rapidly into a superpower. Intending to use surplus grain to finance ambitious industrial projects, Mao ordered the doubling and tripling of agricultural production. From the government offices in Beijing his impossible demands made their way down the bureaucratic ladder, through provincial administrators, all the way down to the village headmen. The local officials, afraid of voicing any criticism and wishing to curry favour with their superiors, concocted imaginary reports of dramatic increases in agricultural output. As the fabricated numbers made their way back up the bureaucratic hierarchy, each official exaggerated them further, adding a zero here or there with a stroke of a pen.
23. One of the thousands of life-saving visas signed by Sousa Mendes in June 1940 (visa #1902 for Lazare Censor and family, dated 17 June 1940).
23. Courtesy of the Sousa Mendes Foundation.
Consequently, in 1958 the Chinese government was informed that annual grain production was 50 per cent more than it actually was. Believing the reports, the government sold millions of tons of rice to foreign countries in exchange for weapons and heavy machinery, assuming that enough was left to feed the Chinese population. The result was the worst famine in history and the death of tens of millions of Chinese.3
Meanwhile, enthusiastic reports of China’s farming miracle reached audiences throughout the world. Julius Nyerere, the idealistic president of Tanzania, was deeply impressed by the Chinese success. In order to modernise Tanzanian agriculture, Nyerere resolved to establish collective farms on the Chinese model. When peasants objected to the plan, Nyerere sent the army and police to destroy traditional villages and forcibly relocate hundreds of thousands of peasants onto the new collective farms.
Government propaganda depicted the farms as miniature paradises, but many of them existed only in government documents. The protocols and reports written in the capital Dar es Salaam said that on such-and-such a date the inhabitants of such-and-such village were relocated to such-and-such farm. In reality, when the villagers reached their destination, they found absolutely nothing there. No houses, no fields, no tools. Officials nevertheless reported great successes to themselves and to President Nyerere. In fact, within less than ten years Tanzania was transformed from Africa’s biggest food exporter into a net food importer that could not feed itself without external assistance. In 1979, 90 per cent of Tanzanian farmers lived on collective farms, but they generated only 5 per cent of the country’s agricultural output.4
Though the history of writing is rife with similar mishaps, the benefits of a more efficient administration have generally outweighed the costs, at least from the government’s perspective. No ruler could resist the temptation of trying to alter reality with the stroke of a pen, and if disaster resulted, the remedy seemed to consist of writing ever more voluminous memos and issuing ever more codes, edicts and orders.
Written language may have been conceived as a modest way of describing reality, but it gradually became a powerful way to reshape reality. When official reports collided with objective reality, it was often reality that had to give way. Anyone who has ever dealt with the tax authorities, the educational system or any other complex bureaucracy knows that the truth hardly matters. What’s written on your form is far more important.
Is it true that when text and reality collide, reality sometimes has to give way? Isn’t that just a common but exaggerated slander of bureaucratic systems? Most bureaucrats – whether serving pharaoh or Mao Zedong – were reasonable people, and surely would have made the following argument: ‘We use writing to describe the reality of fields, canals and granaries. If the description is accurate, we make realistic decisions. If the description is inaccurate, it causes famines and even rebellions. Then we, or the administrators of some future regime, learn from that mistake, and strive to produce more truthful descriptions. So over time, our documents are bound to become ever more precise.’
That’s true to some extent, but it ignores an opposite historical dynamic. As bureaucracies accumulate power, they become immune to their own mistakes. Instead of changing their stories to fit reality, they can change reality to fit their stories. In the end external reality matches their bureaucratic fantasies, but only because they forced reality to do so. For example, the borders of many African countries disregard river lines, mountain ranges and trade routes, split historical and economic zones unnecessarily, and ignore local ethnic and religious identities. The same tribe may find itself split between several countries, whereas one country may incorporate splinters of numerous rival clans. Such problems bedevil countries all over the world, but in Africa they are particularly acute because modern African borders don’t reflect the wishes and struggles of local nations. They were drawn by European bureaucrats who never set foot in Africa.
In the late nineteenth century, several European powers laid claim to African territories. Fearing that conflicting claims might lead to an all-out European war, the concerned parties got together in Berlin in 1884 and divided Africa as if it were a pie. Back then much of the African interior was terra incognita to Europeans. The British, French and Germans had accurate maps of Africa’s coastal regions, and knew precisely where the Niger, Congo and Zambezi emptied into the ocean. However, they knew little about the course these rivers took inland, about the kingdoms and tribes that lived along their banks, and about local religion, history and geography. This hardly mattered to the European diplomats. They unrolled a half-empty map of Africa over a well-polished Berlin table, sketched a few lines here and there, and divided the continent among them.
When in due course the Europeans penetrated the African interior, armed with their agreed-upon map, they discovered that many of the borders drawn in Berlin did little justice to the geographic, economic and ethnic reality of Africa. However, to avoid renewed clashes, the invaders stuck to their agreements, and these imaginary lines became the actual borders of European colonies. During the second half of the twentieth century, as the European empires disintegrated and their colonies gained independence, the new countries accepted the colonial borders, fearing that the alternative would be endless wars and conflicts. Many of the difficulties faced by present-day African countries stem from the fact that their borders make little sense. When the written fantasies of European bureaucracies encountered the African reality, reality was forced to surrender.5
Our modern education systems provide numerous other examples of reality kowtowing to written records. When measuring the width of my desk, the yardstick I am using matters little. The width of my desk remains the same whether I say it is 200 centimetres or 78.74 inches. However, when bureaucracies measure people, the yardsticks they choose make all the difference. When schools began assessing people according to precise numerical marks, the lives of millions of students and teachers changed dramatically. Marks are a relatively new invention. Hunter-gatherers were never marked for their achievements, and even thousands of years after the Agricultural Revolution, few education establishments used precise marks. At the end of the year a medieval apprentice cobbler did not receive a piece of paper saying he had got an A in shoelaces but a C minus in buckles. An undergraduate in Shakespeare’s day left Oxford with one of only two possible results – with a degree, or without one. Nobody thought of giving one student a final mark of 74 and another student an 88.6
24. A mid-nineteenth century European map of Africa. Europeans knew very little about the African interior, but that did not prevent them from divvying up the continent and drawing its borders.
24. © Antiqua Print Gallery/Alamy Stock Photo.
It was the mass education systems of the industrial age that began using precise marks on a regular basis. After both factories and government ministries became accustomed to thinking in the language of numbers, schools followed suit. They started to gauge the worth of each student according to his or her average mark, whereas the worth of each teacher and principal was judged according to the school’s overall average. Once bureaucrats adopted this yardstick, reality was transformed.
Originally, schools were supposed to focus on enlightening and educating students, and marks were merely a means of measuring success. But naturally enough schools soon began focusing on achieving high marks. As every child, teacher and inspector knows, the skills required to get high marks in an exam are not the same as a true understanding of literature, biology or mathematics. Every child, teacher and inspector also knows that when forced to choose between the two, most schools will go for the marks.
The power of written records reached its apogee with the appearance of holy scriptures. Priests and scribes in ancient civilisations became accustomed to seeing documents as guidebooks for reality. At first, the texts told them about the reality of taxes, fields and granaries. But as the bureaucracy gained power, so the texts gained authority. Priests recorded not only lists of the god’s property, but also the god’s deeds, commandments and secrets. The resulting scriptures purported to describe reality in its entirety, and generations of scholars became accustomed to looking for all the answers in the pages of the Bible, the Qur’an or the Vedas.
In theory, if some holy book misrepresented reality, its disciples would sooner or later discover this, and the text’s authority would be undermined. Abraham Lincoln said you cannot deceive everybody all the time. Well, that’s wishful thinking. In practice, the power of human cooperation networks depends on a delicate balance between truth and fiction. If you distort reality too much, it will weaken you, and you will not be able to compete against more clear-sighted rivals. On the other hand, you cannot organise masses of people effectively without relying on some fictional myths. So if you stick to unalloyed reality, without mixing any fiction with it, few people will follow you.
If you used a time machine to send a modern scientist to ancient Egypt, she would not be able to seize power by exposing the fictions of the local priests and lecturing the peasants on evolution, relativity and quantum physics. Of course, if our scientist could use her knowledge in order to produce a few rifles and artillery pieces, she could gain a huge advantage over pharaoh and the crocodile god Sobek. Yet in order to mine iron ore, build blast furnaces and manufacture gunpowder the scientist would need a lot of hard-working peasants. Do you really think she could inspire them by explaining that energy divided by mass equals the speed of light squared? If you happen to think so, you are welcome to travel to present-day Afghanistan or Syria and try your luck.
Really powerful human organisations – such as pharaonic Egypt, the European empires and the modern school system – are not necessarily clear-sighted. Much of their power rests on their ability to force their fictional beliefs on a submissive reality. That’s the whole idea of money, for example. The government makes worthless pieces of paper, declares them to be valuable and then uses them to compute the value of everything else. The government has the power to force citizens to pay taxes using these pieces of paper, so the citizens have no choice but to get their hands on at least some of them. Consequently, these bills really do become valuable, the government officials are vindicated in their beliefs, and since the government controls the issuing of paper money, its power grows. If somebody protests that ‘These are just worthless pieces of paper!’ and behaves as if they are only pieces of paper, he won’t get very far in life.
The same thing happens when the education system declares that matriculation exams are the best method to evaluate students. The system has sufficient authority to influence admission standards to colleges and hiring standards in government offices and in the private sector. Students therefore invest all their efforts in getting good marks. Coveted positions are occupied by people with high marks, who naturally support the system that brought them there. The fact that the education system controls the critical exams gives it more power, and increases its influence over colleges, government offices and the job market. If somebody protests that ‘The degree certificate is just a piece of paper!’ and behaves accordingly, he is unlikely to get very far in life.
Holy scriptures work the same way. The religious establishment proclaims that the holy book contains the answers to all our questions. It simultaneously presses courts, governments and businesses to behave according to what the holy book says. When a wise person reads scriptures and then looks at the world, he sees that there is indeed a good match. ‘Scriptures say that you must pay tithes to God – and look, everybody pays. Scriptures say that women are inferior to men, and cannot serve as judges or even give testimony in court – and look, there are indeed no women judges and the courts reject their testimony. Scriptures say that whoever studies the word of God will succeed in life – and look, all the good jobs are indeed held by people who know the holy book by heart.’
Such a wise person will naturally begin to study the holy book, and because he is wise, he will become a scriptural pundit and be appointed a judge. When he becomes a judge, he will not allow women to bear witness in court, and when he chooses his successor, he will obviously pick somebody who knows the holy book well. If someone protests that ‘This book is just paper!’ and behaves accordingly, such a heretic will not get very far in life.
Even when scriptures mislead people about the true nature of reality, they can nevertheless retain their authority for thousands of years. For instance, the biblical perception of history is fundamentally flawed, yet it managed to spread throughout the world, and many millions still believe in it. The Bible peddled a monotheistic theory of history, claiming that the world is governed by a single all-powerful deity, who cares above all else about me and my doings. If something good happens, it must be a reward for my good deeds. Any catastrophe must surely be punishment for my sins.
Thus the ancient Jews believed that if they suffered from drought, or if King Nebuchadnezzar of Babylonia invaded Judaea and exiled its people, surely these were divine punishments for their own sins. And if King Cyrus of Persia defeated the Babylonians and allowed the Jewish exiles to return home and rebuild Jerusalem, God in his mercy must have heard their remorseful prayers. The Bible doesn’t recognise the possibility that perhaps the drought resulted from a volcanic eruption in the Philippines, that Nebuchadnezzar invaded in pursuit of Babylonian commercial interests and that King Cyrus had his own political reasons to favour the Jews. The Bible accordingly shows no interest whatsoever in understanding the global ecology, the Babylonian economy or the Persian political system.
Such self-absorption characterises all humans in childhood. Children of all religions and cultures think they are the centre of the world, and therefore show little genuine interest in the conditions and feelings of other people. That’s why divorce is so traumatic for children. A five-year-old cannot understand that something important is happening for reasons unrelated to him. No matter how many times mommy and daddy tell him that they are independent people with their own problems and wishes, and that they didn’t divorce because of him – the child cannot absorb it. He is convinced that everything happens because of him. Most people grow out of this infantile delusion. Monotheists hold on to it till the day they die. Like a child thinking that his parents are fighting because of him, the monotheist is convinced that the Persians are fighting the Babylonians because of him.
Already in biblical times some cultures had a far more accurate perception of history. Animist and polytheist religions depicted the world as the playground of numerous different powers rather than a single god. It was consequently easy for animists and polytheists to accept that many events are unrelated to me or to my favourite deity, and they are neither punishments for my sins nor rewards for my good deeds. Greek historians such as Herodotus and Thucydides, and Chinese historians such as Sima Qian, developed sophisticated theories of history that are very similar to our own modern views. They explained that wars and revolutions break out due to myriad political, social and economic factors. People may fall victim to war through no fault of their own. Accordingly, Herodotus developed a keen interest in understanding Persian politics, while Sima Qian was very concerned about the culture and religion of barbarous steppe people.7
Present-day scholars agree with Herodotus and Sima Qian rather than with the Bible. That’s why all modern states invest so much effort in collecting information about other countries, and in analysing global ecological, political and economic trends. When the US economy falters, even evangelical Republicans sometimes point an accusing finger at China rather than at their own sins.
Yet even though Herodotus and Thucydides understood reality much better than the authors of the Bible, when the two world views collided, the Bible won by a knockout. The Greeks adopted the Jewish view of history, rather than vice versa. A thousand years after Thucydides, the Greeks became convinced that if some barbarian horde invaded, surely it was divine punishment for their sins. No matter how mistaken the biblical world view was, it provided a better basis for large-scale human cooperation.
Indeed, even today when US presidents take their oath of office, they put their hand on a Bible. Similarly in many countries around the world, including the USA and the UK, witnesses in courts put their hand on a Bible when swearing to tell the truth, the whole truth and nothing but the truth. It’s ironic that they swear to tell the truth on a book brimming with so many fictions, myths and errors.
Fictions enable us to cooperate better. The price we pay is that the same fictions also determine the goals of our cooperation. So we may have very elaborate systems of cooperation, which are harnessed to serve fictional aims and interests. Consequently the system may seem to be working well, but only if we adopt the system’s own criteria. For example, a Muslim mullah would say: ‘Our system works. There are now 1.5 billion Muslims worldwide, and more people study the Qur’an and submit themselves to Allah’s will than ever before.’ The key question, though, is whether this is the right yardstick for measuring success. A school principal would say: ‘Our system works. During the last five years, exam results have risen by 7.3 per cent.’ Yet is that the best way to judge a school? An official in ancient Egypt would say: ‘Our system works. We collect more taxes, dig more canals and build bigger pyramids than anyone else in the world.’ True enough, pharaonic Egypt led the world in taxation, irrigation and pyramid construction. But is that what really counts?
People have many material, social and psychological needs. It is far from clear that peasants in ancient Egypt enjoyed more love or better social relations than their hunter-gatherer ancestors, and in terms of nutrition, health and child mortality it seems that life was actually worse. A document dated c.1850 BC from the reign of Amenemhat III – the pharaoh who created Lake Fayum – tells of a well-to-do man called Dua-Khety who took his son Pepy to school, so that he could learn to be a scribe. While on their way, Dua-Khety portrayed the miserable life of peasants, labourers, soldiers and artisans, so as to encourage Pepy to devote all his energy to studying and thereby escape the unhappy destiny of most humans.
According to Dua-Khety, the life of a landless field labourer is full of hardship and misery. Dressed in mere tatters, he works all day till his fingers are covered in blisters. Then pharaoh’s officials come and take him away to do forced labour. In return for all his hard work he receives only sickness as payment. Even if he makes it home alive, he will be completely worn out and ruined. The fate of the landholding peasant is hardly better. He spends his days carrying water in buckets from the river to the field. The heavy load bends his shoulders and covers his neck with festering swellings. In the morning he has to water his plot of leeks, in the afternoon his date palms and in the evening his coriander field. Eventually he drops down and dies.8 The text might be exaggerating things on purpose, but not by much. Pharaonic Egypt was the most powerful kingdom of its day, but for the simple peasant all that power meant taxes and forced labour rather than clinics and social security services.
This was not a uniquely Egyptian defect. Despite all the immense achievements of the Chinese dynasties, the Muslim empires and the European kingdoms, even in AD 1850 the life of the average person was not better – and might actually have been worse – than the lives of archaic hunter-gatherers. In 1850 a Chinese peasant or a Manchester factory hand worked longer hours than their hunter-gatherer ancestors; their jobs were physically harder and mentally less fulfilling; their diet was less balanced; hygiene conditions were incomparably worse; and infectious diseases were far more common.
Suppose you were given a choice between the following two vacation packages:
Stone Age package: On day one we will hike for ten hours in a pristine forest, setting camp for the night in a clearing by a river. On day two we will canoe down the river for ten hours, camping on the shores of a small lake. On day three we will learn from the native people how to fish in the lake and how to find mushrooms in the nearby woods.
Modern proletarian package: On day one we will work for ten hours in a polluted textile factory, passing the night in a cramped apartment block. On day two we will work for ten hours as cashiers in the local department store, going back to sleep in the same apartment block. On day three we will learn from the native people how to open a bank account and fill out mortgage forms.
Which package would you choose?
Hence when we come to evaluate human cooperation networks, it all depends on the yardsticks and viewpoint we adopt. Are we judging pharaonic Egypt in terms of production, nutrition or perhaps social harmony? Do we focus on the aristocracy, the simple peasants, or the pigs and crocodiles? History isn’t a single narrative, but thousands of alternative narratives. Whenever we choose to tell one, we are also choosing to silence others.
Human cooperative networks usually judge themselves by yardsticks of their own invention and, not surprisingly, they often give themselves high marks. In particular, human networks built in the name of imaginary entities such as gods, nations and corporations normally judge their success from the viewpoint of the imaginary entity. A religion is successful if it follows divine commandments to the letter; a nation is glorious if it promotes the national interest; and a corporation thrives if it makes a lot of money.
When examining the history of any human network, it is therefore advisable to stop from time to time and look at things from the perspective of some real entity. How do you know if an entity is real? Very simple – just ask yourself, ‘Can it suffer?’ When people burn down the temple of Zeus, Zeus doesn’t suffer. When the euro loses its value, the euro doesn’t suffer. When a bank goes bankrupt, the bank doesn’t suffer. When a country suffers a defeat in war, the country doesn’t really suffer. It’s just a metaphor. In contrast, when a soldier is wounded in battle, he really does suffer. When a famished peasant has nothing to eat, she suffers. When a cow is separated from her newborn calf, she suffers. This is reality.
Of course suffering might well be caused by our belief in fictions. For example, belief in national and religious myths might cause the outbreak of war, in which millions lose their homes, their limbs and even their lives. The cause of war is fictional, but the suffering is 100 per cent real. This is exactly why we should strive to distinguish fiction from reality.
Fiction isn’t bad. It is vital. Without commonly accepted stories about things like money, states or corporations, no complex human society can function. We can’t play football unless everyone believes in the same made-up rules, and we can’t enjoy the benefits of markets and courts without similar make-believe stories. But the stories are just tools. They should not become our goals or our yardsticks. When we forget that they are mere fiction, we lose touch with reality. Then we begin entire wars ‘to make a lot of money for the corporation’ or ‘to protect the national interest’. Corporations, money and nations exist only in our imagination. We invented them to serve us; why do we find ourselves sacrificing our lives in their service?
In the twenty-first century we will create more powerful fictions and more totalitarian religions than in any previous era. With the help of biotechnology and computer algorithms these religions will not only control our minute-by-minute existence, but will be able to shape our bodies, brains and minds, and to create entire virtual worlds complete with hells and heavens. Being able to distinguish fiction from reality and religion from science will therefore become more difficult but more vital than ever before.
Stories serve as the foundations and pillars of human societies. As history unfolded, stories about gods, nations and corporations grew so powerful that they began to dominate objective reality. Believing in the great god Sobek, the Mandate of Heaven or the Bible enabled people to build Lake Fayum, the Great Wall of China and Chartres Cathedral. Unfortunately, blind faith in these stories meant that human efforts frequently focused on increasing the glory of fictional entities such as gods and nations, instead of bettering the lives of real sentient beings.
Does this analysis still hold true today? At first sight, it seems that modern society is very different from the kingdoms of ancient Egypt or medieval China. Hasn’t the rise of modern science changed the basic rules of the human game? Wouldn’t it be true to say that despite the ongoing importance of traditional myths, modern social systems increasingly rely on objective scientific theories such as the theory of evolution, which simply did not exist in ancient Egypt or medieval China?
We could of course argue that scientific theories are a new kind of myth, and that our belief in science is no different from the ancient Egyptians’ belief in the great god Sobek. Yet the comparison doesn’t hold water. Sobek existed only in the collective imagination of his devotees. True, praying to Sobek helped cement the Egyptian social system, thereby enabling people to build dams and canals that prevented floods and droughts. Yet the prayers themselves didn’t raise or lower the Nile’s water level in the slightest. In contrast, scientific theories are not just a way to bind people together. It is often said that God helps those who help themselves. This is a roundabout way of saying that God doesn’t exist, but if our belief in Him inspires us to do something ourselves – it helps. Antibiotics, unlike God, help even those who don’t help themselves. They cure infections whether you believe in them or not.
Consequently, the modern world is very different from the premodern world. Egyptian pharaohs and Chinese emperors failed to overcome famine, plague and war despite millennia of effort. Modern societies managed to do it within a few centuries. Isn’t this the fruit of abandoning intersubjective myths in favour of objective scientific knowledge? And can we not expect this process to accelerate in the coming decades? As technology enables us to upgrade humans, overcome old age and find the key to happiness, won’t people care less about fictional gods, nations and corporations, and focus instead on deciphering the physical and biological reality?
It might seem so, but in fact things are far more complicated. Modern science certainly changed the rules of the game, yet it did not simply replace myths with facts. Myths continue to dominate humankind, and science only makes these myths stronger. Instead of destroying the intersubjective reality, science will enable it to control the objective and subjective realities more completely than ever before. Thanks to computers and bioengineering, the difference between fiction and reality will blur, as people reshape reality to match their pet fictions.
The priests of Sobek imagined the existence of divine crocodiles, while pharaoh dreamt about immortality. In reality the sacred crocodile was a very ordinary swamp reptile dressed in golden finery, and pharaoh was as mortal as the poorest peasant. After death his corpse was mummified using preservative balms and scented perfumes, but it nonetheless remained as lifeless as one can get. In contrast, twenty-first-century scientists might be able to engineer actual super-crocodiles, and to provide the human elite with eternal youth here on earth.
Consequently the rise of science will make at least some myths and religions mightier than ever. To understand why, and to face the challenges of the twenty-first century, we should therefore revisit one of the most vexing questions of all: how does modern science relate to religion? It seems that people have already said a million times everything there is to say about this question. Yet in practice, science and religion are like a husband and wife who after 500 years of marriage counselling still don’t know each other. He still dreams about Cinderella and she keeps pining for Prince Charming, while they argue about whose turn it is to take out the rubbish.
Most of the misunderstandings regarding science and religion result from faulty definitions of religion. All too often people confuse religion with superstition, spirituality, belief in supernatural powers or belief in gods. Religion is none of these things. Religion cannot be equated with superstition, because most people are unlikely to call their most cherished beliefs ‘superstitions’. We always believe in ‘the truth’; only other people believe in superstitions.
Similarly, few people put their faith in supernatural powers. For those who believe in demons, spirits and fairies, these beings are not supernatural. They are an integral part of nature, just like porcupines, scorpions and germs. Modern physicians blame disease on invisible germs, and voodoo priests blame disease on invisible spirits. There’s nothing supernatural about it: if you make some spirit angry, the spirit enters your body and causes you pain. What could be more natural than that? Only those who don’t believe in spirits think of them as standing apart from the natural order of things.
Equating religion with faith in supernatural powers implies that you can understand all known natural phenomena without religion, which is just an optional supplement. Having understood perfectly well the whole of nature, you can now choose whether or not to add some ‘super-natural’ religious dogma. Most religions, however, argue that you simply cannot understand the world without them. You will never comprehend the true reason for disease, drought or earthquakes if you do not take their dogma into account.
Defining religion as ‘belief in gods’ is also problematic. We tend to say that a devout Christian is religious because she believes in God, whereas a fervent communist isn’t religious, because communism has no gods. However, religion is created by humans rather than by gods, and it is defined by its social function rather than by the existence of deities. Religion is any all-encompassing story that confers superhuman legitimacy on human laws, norms and values. It legitimises human social structures by arguing that they reflect superhuman laws.
Religion asserts that we humans are subject to a system of moral laws that we did not invent and that we cannot change. A devout Jew would say that this is the system of moral laws created by God and revealed in the Bible. A Hindu would say that Brahma, Vishnu and Shiva created the laws, which were revealed to us humans in the Vedas. Other religions, from Buddhism and Daoism to communism, Nazism and liberalism, argue that the so-called superhuman laws are natural laws, and not the creation of this or that god. Of course, each believes in a different set of natural laws discovered and revealed by different seers and prophets, from Buddha and Laozi to Marx and Hitler.
A Jewish boy comes to his father and asks, ‘Dad, why shouldn’t we eat pork?’ The father thoughtfully strokes his long curly beard and answers, ‘Well, Yankele, that’s how the world works. You are still young and don’t yet understand, but if we eat pork, God will punish us and we will come to a bad end. It isn’t my idea. It’s not even the rabbi’s idea. If the rabbi had created the world, maybe he would have created a world in which pork was perfectly kosher. But the rabbi didn’t create the world – God did. And God said, I don’t know why, that we shouldn’t eat pork. So we shouldn’t. Capeesh?’
In 1943 a German boy comes to his father, a senior SS officer, and asks, ‘Dad, why are we killing the Jews?’ The father, putting on his shiny leather boots, explains, ‘Well, Fritz, that’s how the world works. You are still young and don’t yet understand, but if we allow the Jews to live they will cause the degeneration and extinction of humankind. It’s not my idea. And it’s not even the Führer’s idea. If Hitler had created the world, maybe he would have created a world in which the laws of natural selection did not apply, and Jews and Aryans could all live together in perfect harmony. But Hitler didn’t create the world. He just managed to decipher the laws of nature, and then instructed us how to live in line with them. If we disobey these laws, we will come to a bad end. Ist das klar?!’
In 2016 a British boy comes to his father, a liberal MP, and asks, ‘Dad, why should we care about the human rights of Muslims in the Middle East?’ The father puts down his cup of tea, thinks for a moment, and says, ‘Well, Duncan, that’s how the world works. You are still young and don’t yet understand, but all humans, even Muslims in the Middle East, have the same nature and therefore enjoy the same natural rights. This isn’t my idea, nor a decision of Parliament. If Parliament had created the world, universal human rights might well have been buried in some subcommittee along with all that quantum physics stuff. But Parliament didn’t create the world, it just tries to make sense of it, and we must respect the natural rights even of Muslims in the Middle East, or very soon our own rights will also be violated, and we will come to a bad end. Now off you go.’
Liberals, communists and followers of other modern creeds dislike describing their own system as a ‘religion’, because they identify religion with superstitions and supernatural powers. If you tell communists or liberals that they are religious, they think you are accusing them of blindly believing in groundless pipe dreams. In fact, it means only that they believe in some system of moral laws that wasn’t invented by humans, but that humans must nevertheless obey. As far as we know, all human societies believe in this. Every society tells its members that they must obey some superhuman moral law, and that breaking this law will result in catastrophe.
Religions differ of course in the details of their stories, their concrete commandments, and the rewards and punishments they promise. Thus in medieval Europe the Catholic Church argued that God doesn’t like rich people. Jesus said that it is easier for a camel to pass through the eye of a needle than for a rich man to pass through the gates of heaven. To help rich people enter God’s kingdom, the Church encouraged them to give lots of alms, threatening that misers will burn in hell. Modern communism also dislikes rich people, but it threatens them with class conflict here and now, rather than with burning sulphur after death.
The communist laws of history are similar to the commandments of the Christian God, inasmuch as they are superhuman forces that humans cannot change at will. Humans can decide tomorrow morning to cancel the offside rule in football, because we invented that law and are free to change it. However, at least according to Marx, we cannot change the laws of history. No matter what the capitalists do, as long as they continue to accumulate private property they are bound to create class conflict and are destined to be defeated by the rising proletariat.
If you happen to be a communist yourself, you might argue that communism and Christianity are nevertheless very different because communism is right, whereas Christianity is wrong. Class conflict really is inherent in the capitalist system, but the rich don’t in fact suffer eternal tortures in hell after they die. Yet even if that’s the case, it doesn’t mean communism is not a religion. Rather, it means that communism is the one true religion. Followers of every religion are convinced that theirs alone is true. Perhaps the followers of one religion are correct.
The assertion that religion is a tool for preserving social order and for organising large-scale cooperation may vex those for whom it represents first and foremost a spiritual path. However, just as the gap between religion and science is narrower than we commonly think, so the gap between religion and spirituality is much wider. Religion is a deal, whereas spirituality is a journey.
Religion gives a complete description of the world, and offers us a well-defined contract with predetermined goals. ‘God exists. He told us to behave in certain ways. If you obey God, you’ll be admitted to heaven. If you disobey Him, you’ll burn in hell.’ The very clarity of this deal allows society to define common norms and values that regulate human behaviour.
Spiritual journeys are nothing like that. They usually take people in mysterious ways towards unknown destinations. The quest usually begins with some big question, such as who am I? What is the meaning of life? What is good? Whereas most people just accept the ready-made answers provided by the powers that be, spiritual seekers are not so easily satisfied. They are determined to follow the big question wherever it leads, and not just to places they know well or wish to visit. Thus for most people, academic studies are a deal rather than a spiritual journey, because they take us to a predetermined goal approved by our elders, governments and banks. ‘I’ll study for three years, pass the exams, get my BA certificate and secure a well-paid job.’ Academic studies might be transformed into a spiritual journey if the big questions you encounter on the way deflect you towards unexpected destinations, of which you could hardly even conceive at first. For example, a student might begin to study economics in order to secure a job on Wall Street. However, if what she learns somehow induces her to end up in a Hindu ashram or helping HIV patients in Zimbabwe, then we could call that a spiritual journey.
Why label such a voyage ‘spiritual’? This is a legacy from ancient dualist religions that believed in the existence of two gods, one good and one evil. According to dualism, the good god created pure and everlasting souls that lived in a blissful world of spirit. However, the evil god – sometimes named Satan – created another world, made of matter. Satan didn’t know how to make his creation endure, hence in the world of matter everything rots and disintegrates. In order to breathe life into his defective creation, Satan tempted souls from the pure world of spirit, and confined them inside material bodies. That’s what a human is – a good spiritual soul trapped inside an evil material body. Since the soul’s prison – the body – decays and eventually dies, Satan ceaselessly tempts the soul with bodily delights, and above all with food, sex and power. When the body disintegrates and the soul has the opportunity to escape back to the spiritual world, its craving for bodily pleasures lures it back inside some new material body. The soul thus transmigrates from body to body, wasting its days in pursuit of food, sex and power.
Dualism instructs people to break these material shackles and undertake a journey back to the spiritual world, which is totally unfamiliar to us, but is our true home. During this quest we must reject all material temptations and deals. Due to this dualist legacy, every journey on which we doubt the conventions and deals of the mundane world and venture forth towards an unknown destination is called a ‘spiritual’ journey.
Such journeys are fundamentally different from religions, because religions seek to cement the worldly order whereas spirituality seeks to escape it. Often enough, one of the most important obligations for spiritual wanderers is to challenge the beliefs and conventions of dominant religions. In Zen Buddhism it is said that ‘If you meet the Buddha on the road, kill him.’ Which means that if while walking on the spiritual path you encounter the rigid ideas and fixed laws of institutionalised Buddhism, you must free yourself from them too.
For religions, spirituality is a dangerous threat. Religions typically strive to rein in the spiritual quests of their followers, and many religious systems have been challenged not by laypeople preoccupied with food, sex and power, but rather by spiritual truth-seekers who expected more than platitudes. Thus the Protestant revolt against the authority of the Catholic Church was ignited not by hedonistic atheists but rather by a devout and ascetic monk, Martin Luther. Luther wanted answers to the existential questions of life, and refused to settle for the rites, rituals and deals offered by the Church.
In Luther’s day, the Church promised its followers some very enticing deals indeed. If you sinned, and feared eternal damnation in the afterlife, all you needed to do was open your purse and buy an indulgence. In the early sixteenth century the Church employed professional ‘salvation peddlers’ who wandered the towns and villages of Europe and sold indulgences for fixed prices. You want an entry visa to heaven? Pay ten gold coins. You want your dead Grandpa Heinz and Grandma Gertrud to join you there? No problem, but it will cost you thirty coins. The most famous of these peddlers, the Dominican friar Johannes Tetzel, allegedly said that the moment the coin clinks in the money chest, the soul flies out of purgatory to heaven.1
25. The Pope selling indulgences for money (from a Protestant pamphlet). {Woodcut from ‘Passional Christi und Antichristi’ by Philipp Melanchthon, published in 1521, studio of Lucas Cranach (1472–1553). © Private Collection/Bridgeman Images.}
The more Luther thought about it, the more he doubted this deal, and the Church that offered it. You cannot just buy your way to salvation. The Pope couldn’t possibly have the authority to forgive people their sins, and open the gates of heaven. According to Protestant tradition, on 31 October 1517 Luther walked to the All Saints’ Church in Wittenberg, carrying a lengthy document, a hammer and some nails. The document listed ninety-five theses against contemporary religious practices, including against the selling of indulgences. Luther nailed it to the church door, sparking the Protestant Reformation, which called upon any Christian who cared about salvation to rebel against the Pope’s authority and search for alternative routes to heaven.
From a historical perspective, the spiritual journey is always tragic, for it is a lonely path fit only for individuals rather than for entire societies. Human cooperation requires firm answers rather than just questions, and those who fume against stultified religious structures often end up forging new structures in their place. It happened to the dualists, whose spiritual journeys became religious establishments. It happened to Martin Luther, who after challenging the laws, institutions and rituals of the Catholic Church found himself writing new law books, founding new institutions and inventing new ceremonies. It happened even to Buddha and Jesus. In their uncompromising quest for the truth they subverted the laws, rituals and structures of traditional Hinduism and Judaism. But eventually more laws, more rituals and more structures were created in their names than in the name of any other person in history.
Now that we have a better understanding of religion, we can go back to examining the relationship between religion and science. There are two extreme interpretations for this relationship. One view says that science and religion are sworn enemies, and that modern history was shaped by the life-and-death struggle of scientific knowledge against religious superstition. In time, the light of science dispelled the darkness of religion, and the world became increasingly secular, rational and prosperous. However, though some scientific findings certainly undermine religious dogmas, this is not inevitable. For example, Muslim dogma holds that Islam was founded by the prophet Muhammad in seventh-century Arabia, and there is ample scientific evidence supporting this.
More importantly, science always needs religious assistance in order to create viable human institutions. Scientists study how the world functions, but there is no scientific method for determining how humans ought to behave. Science tells us that humans cannot survive without oxygen. However, is it okay to execute criminals by asphyxiation? Science doesn’t know how to answer such a question. Only religions provide us with the necessary guidance.
Hence every practical project scientists undertake also relies on religious insights. Take, for instance, the construction of the Three Gorges Dam on the Yangtze River. When the Chinese government decided in 1992 to build the dam, physicists could calculate what pressures the dam would have to withstand, economists could forecast how much money it would probably cost, while electrical engineers could predict how much electricity it would produce. However, the government needed to take additional factors into account. Building the dam flooded more than 200 square miles containing many villages and towns, thousands of archaeological sites, and unique landscapes and habitats. More than 1 million people were displaced and hundreds of species were endangered. It seems that the dam directly caused the extinction of the Chinese river dolphin. No matter what you personally think about the Three Gorges Dam, it is clear that its construction was an ethical rather than a purely scientific issue. No physics experiment, no economic model and no mathematical equation can determine whether generating thousands of megawatts and making billions of yuan is more valuable than saving an ancient pagoda or the Chinese river dolphin. Consequently China cannot function on the basis of scientific theories alone. It requires some religion or ideology, too.
Some jump to the opposite extreme, and say that science and religion are completely separate kingdoms. Science studies facts, religion speaks about values, and never the twain shall meet. Religion has nothing to say about scientific facts, and science should keep its mouth shut concerning religious convictions. If the Pope believes that human life is sacred, and abortion is therefore a sin, biologists can neither prove nor refute this claim. As a private individual, each biologist is welcome to argue with the Pope. But as a scientist, the biologist cannot enter the fray.
This approach may sound sensible, but it misunderstands religion. Though science indeed deals only with facts, religion never confines itself to ethical judgements. Religion cannot provide us with any practical guidance unless it makes some factual claims too, and here it may well collide with science. The most important segments of many religious dogmas are not their ethical principles, but rather factual statements such as ‘God exists’, ‘the soul is punished for its sins in the afterlife’, ‘the Bible was written by a deity rather than by humans’, ‘the Pope is never wrong’. These are all factual claims. Many of the most heated religious debates, and many of the conflicts between science and religion, involve such factual claims rather than ethical judgements.
Take abortion, for example. Devout Christians often oppose abortion, whereas many liberals support it. The main bone of contention is factual rather than ethical. Both Christians and liberals believe that human life is sacred, and that murder is a heinous crime. But they disagree about certain biological facts: does human life begin at the moment of conception, at the moment of birth or at some intermediate point? Indeed, some human cultures maintain that life doesn’t begin even at birth. According to the !Kung of the Kalahari Desert and to various Inuit groups in the Arctic, human life begins only after a baby is given a name. When an infant is born the family waits for some time before naming it. If they decide not to keep the baby (either because it suffers from some deformity or because of economic difficulties), they kill it. Provided they do so before the naming ceremony, this is not considered murder.2 People from such cultures might well agree with liberals and Christians that human life is sacred and that murder is a terrible crime, yet condone infanticide.
When religions advertise themselves, they tend to emphasise their beautiful values. But God often hides in the fine print of factual statements. The Catholic religion markets itself as the religion of universal love and compassion. How wonderful! Who can object to that? Why, then, are not all humans Catholic? Because when you read the fine print, you discover that Catholicism also demands blind obedience to a pope ‘who never makes mistakes’ even when he orders his followers to go on crusades and burn heretics at the stake. Such practical instructions are not deduced solely from ethical judgements. Rather, they result from conflating ethical judgements with factual statements.
When we descend from the ethereal sphere of philosophy and observe historical realities, we find that religious stories almost always include three parts:
1. Ethical judgements, such as ‘human life is sacred’.
2. Factual statements, such as ‘human life begins at the moment of conception’.
3. A conflation of the ethical judgements with the factual statements, resulting in practical guidelines such as ‘you should never allow abortion, even a single day after conception’.
Science has no ability to refute or corroborate the ethical judgements religions make. But scientists do have a lot to say about religious factual statements. Biologists are more qualified than priests to answer factual questions such as ‘Do human fetuses have a nervous system one week after conception? Can they feel pain?’
To make things clearer let us examine in depth a real historical example that you rarely hear about in religious commercials, but that had a huge social and political impact in its time. In medieval Europe the popes enjoyed far-reaching political authority. Whenever a conflict erupted anywhere in Europe, they claimed the authority to decide the issue. To establish their claim to authority, they repeatedly reminded Europeans of the Donation of Constantine. According to this story, on 30 March 315 the Roman emperor Constantine signed an official decree granting Pope Sylvester I and his heirs perpetual control of the western part of the Roman Empire. The popes kept this precious document in their archive, and used it as a powerful propaganda tool whenever they faced opposition from ambitious princes, quarrelsome cities or rebellious peasants.
People in medieval Europe had great respect for ancient imperial decrees, and believed that the older the document, the more authority it carried. They also strongly believed that kings and emperors were God’s representatives. Constantine in particular was revered, because he turned the Roman Empire from a pagan realm into a Christian empire. In a clash between the desires of some present-day city council and a decree issued by the great Constantine himself, it was obvious to medieval Europeans that the ancient document ought to be obeyed. Hence whenever the Pope faced political opposition, he waved the Donation of Constantine, demanding obedience. Not that it always worked. But the Donation of Constantine was an important cornerstone of papal propaganda and of the medieval political order.
When we examine the Donation of Constantine closely, we find that this story is composed of three distinct parts:
Ethical judgement: People ought to respect ancient imperial decrees more than present-day popular opinions.
Factual statement: On 30 March 315, Emperor Constantine granted the popes dominion over Europe.
Practical guideline: Europeans in 1315 ought to obey the Pope’s commands.
The ethical authority of ancient imperial decrees is far from self-evident. Most twenty-first-century Europeans think that the wishes of present-day citizens trump the diktats of long-dead monarchs. However, science cannot join this ethical debate, because no experiment or equation can decide the matter. If a modern-day scientist travelled 700 years back in time, she couldn’t prove to medieval Europeans that the decrees of ancient emperors are irrelevant to contemporary political disputes.
Yet the story of Constantine’s Donation was based not just on ethical judgements. It also involved some very concrete factual statements, which science is highly qualified to either verify or falsify. In 1441 Lorenzo Valla – a Catholic priest and a pioneer linguist – published a scientific study proving that Constantine’s Donation was a forgery. Valla analysed the style and grammar of the document, and the various words and terms it contained. He demonstrated that the document included words that were unknown in fourth-century Latin, and that it was most probably forged about 400 years after Constantine’s death. Moreover, the date appearing on the document is ‘30 March, in the year Constantine was consul for the fourth time, and Gallicanus was consul for the first time’. In the Roman Empire, two consuls were elected each year, and it was customary to date documents by their consulate years. Unfortunately, Constantine’s fourth consulate was in 315, whereas Gallicanus was elected consul for the first time only in 317. If this all-important document was indeed composed in Constantine’s days, it would never have contained such a blatant error. It is as if Thomas Jefferson and his colleagues had dated the American Declaration of Independence 34 July 1776.
Today all historians agree that the Donation of Constantine was forged in the papal court sometime in the eighth century. Even though Valla never disputed the moral authority of ancient imperial decrees, his scientific analysis did undermine the practical guideline that Europeans ought to obey the Pope.3
On 20 December 2013 the Ugandan parliament passed the Anti-Homosexuality Act, which criminalised homosexual acts, punishing some of them with life imprisonment. It was inspired and supported by evangelical Christian groups, which maintain that God prohibits homosexuality. As proof they quote Leviticus 18:22 (‘Do not have sexual relations with a man as one does with a woman; that is detestable’) and Leviticus 20:13 (‘If a man has sexual relations with a man as one does with a woman, both of them have done what is detestable. They are to be put to death; their blood will be on their own heads’). In previous centuries the same religious story was responsible for tormenting millions of people all over the world. This story can be briefly summarised as follows:
Ethical judgement: Humans ought to obey God’s commands.
Factual statement: About 3,000 years ago God commanded humans to avoid homosexual activities.
Practical guideline: People should avoid homosexual activities.
Is the story true? Scientists cannot argue with the judgement that humans ought to obey God. Personally, you may dispute it. You may believe that human rights trump divine authority, and if God orders us to violate human rights, we shouldn’t listen to Him. Yet there is no scientific experiment that can decide this issue.
In contrast, science has a lot to say about the factual statement that 3,000 years ago the Creator of the Universe commanded members of the Homo sapiens species to abstain from boy-on-boy action. How do we know this statement is true? Examining the relevant literature reveals that though this statement is repeated in millions of books, articles and Internet sites, they all rely on a single source: the Bible. If so, a scientist would ask, who composed the Bible, and when? Note that this is a factual question, not a question of values. Devout Jews and Christians claim that at least the book of Leviticus was dictated by God to Moses on Mount Sinai, and from that moment onwards not a single letter was either added or deleted from it. ‘But,’ the scientist would insist, ‘how can we be sure of that? After all, the Pope argued that the Donation of Constantine was composed by Constantine himself in the fourth century, when in fact it was forged 400 years later by the Pope’s own clerks.’
We can now use an entire arsenal of scientific methods to determine who composed the Bible, and when. Scientists have been doing exactly that for more than a century, and if you are interested, you can read whole books about their findings. To cut a long story short, most peer-reviewed scientific studies agree that the Bible is a collection of numerous different texts composed by different human authors centuries after the events they purport to describe, and that these texts were not assembled into a single holy book until long after biblical times. For example, whereas King David probably lived around 1000 BC, it is commonly accepted that the book of Deuteronomy was composed in the court of King Josiah of Judah, sometime around 620 BC, as part of a propaganda campaign aimed at strengthening Josiah’s authority. Leviticus was compiled at an even later date, no earlier than 500 BC.
As for the idea that the ancient Jews carefully preserved the biblical text, without adding or subtracting anything, scientists point out that biblical Judaism was not a scripture-based religion at all. Rather, it was a typical Iron Age cult, similar to many of its Middle Eastern neighbours. It had no synagogues, yeshivas, rabbis – or even a bible. Instead, it had elaborate temple rituals, most of which involved sacrificing animals to a jealous sky god so that he would bless his people with seasonal rains and military victories. Its religious elite consisted of priestly families, who owed everything to birth and nothing to intellectual prowess. The mostly illiterate priests were busy with the temple ceremonies and had little time for writing or studying any scriptures.
During the Second Temple period a rival religious elite gradually formed. Due partly to Persian and Greek influences, Jewish scholars who wrote and interpreted texts gained increasing prominence. These scholars eventually came to be known as rabbis, and the texts they compiled were christened ‘the Bible’. Rabbinical authority rested on individual intellectual abilities rather than on birth. The clash between this new literate elite and the old priestly families was inevitable. Fortunately for the rabbis, the Romans torched Jerusalem and its temple in 70 AD while suppressing the Great Jewish Revolt. With the temple in ruins, the priestly families lost their religious authority, their economic power base and their very raison d’être. Traditional Judaism – a Judaism of temples, priests and head-splitting warriors – disappeared. In its place emerged a new Judaism of books, rabbis and hair-splitting scholars. The scholars’ main forte was interpretation. They used this ability not only to explain how an almighty God allowed His temple to be destroyed, but also to bridge the immense gaps between the old Judaism described in biblical stories and the very different Judaism they created.4
Hence according to our best scientific knowledge, the Leviticus injunctions against homosexuality reflect nothing grander than the biases of a few priests and scholars in ancient Jerusalem. Though science cannot decide whether people ought to obey God’s commands, it has many relevant things to say about the provenance of the Bible. If Ugandan politicians think that the power that created the cosmos, the galaxies and the black holes becomes terribly upset whenever two Homo sapiens males have a bit of fun together, then science can help disabuse them of this rather bizarre notion.
In truth, it is not always easy to separate ethical judgements from factual statements. Religions have the nagging tendency to turn factual statements into ethical judgements, thereby creating serious confusion and obfuscating what should have been relatively simple debates. Thus the factual statement ‘God wrote the Bible’ all too often mutates into the ethical injunction ‘you ought to believe that God wrote the Bible’. Merely believing in this factual statement becomes a virtue, whereas doubting it becomes a dreadful sin.
Conversely, ethical judgements often hide within them factual statements that proponents don’t bother to mention, because they think they have been proven beyond doubt. Thus the ethical judgement ‘human life is sacred’ (which science cannot test) may shroud the factual statement ‘every human has an eternal soul’ (which is open to scientific debate). Similarly, when American nationalists proclaim that ‘the American nation is sacred’, this seemingly ethical judgement is in fact predicated on factual statements such as ‘the USA has spearheaded most of the moral, scientific and economic advances of the last few centuries’. Whereas it is impossible to scientifically scrutinise the claim that the American nation is sacred, once we unpack this judgement we may well examine scientifically whether the USA has indeed been responsible for a disproportionate share of moral, scientific and economic breakthroughs.
This has led some philosophers, such as Sam Harris, to argue that science can always resolve ethical dilemmas, because human values always conceal within them some factual statements. Harris thinks that all humans share a single supreme value – minimising suffering and maximising happiness – and therefore all ethical debates are factual arguments concerning the most efficient way to maximise happiness.5 Islamic fundamentalists want to reach heaven in order to be happy, liberals believe that increasing human liberty maximises happiness, and German nationalists think that everyone would be better off if Berlin were allowed to run the planet. According to Harris, Islamists, liberals and nationalists have no ethical dispute; they have a factual disagreement about how best to realise their common goal.
Yet even if Harris is right, and even if all humans cherish happiness, in practice it would be extremely difficult to use this insight to decide ethical disputes, particularly because we have no scientific definition or measurement of happiness. Consider again the case of the Three Gorges Dam. Even if we agree that the ultimate aim of the project is to make the world a happier place, how can we tell whether generating cheap electricity contributes more to global happiness than protecting traditional lifestyles or saving the rare Chinese river dolphin? As long as we haven’t deciphered the mysteries of consciousness, we cannot develop a universal measurement for happiness and suffering, and we don’t know how to compare the happiness and suffering of different individuals, let alone different species. How many units of happiness are generated when a billion Chinese enjoy cheaper electricity? How many units of misery are produced when an entire dolphin species becomes extinct? Indeed, are happiness and misery mathematical entities that can be added or subtracted in the first place? Eating ice cream is enjoyable; finding true love is more enjoyable. Do you think that if you just eat enough ice cream, the accumulated pleasure could ever equal the rapture of true love?
Consequently, although science has much more to contribute to ethical debates than we commonly think, there is a line it cannot cross, at least not yet. Without the guiding hand of some religion, it is impossible to maintain large-scale social orders. Even universities and laboratories need religious backing. Religion provides the ethical justification for scientific research, and in exchange gets to influence the scientific agenda and the uses of scientific discoveries. Hence you cannot understand the history of science without taking religious beliefs into account. Scientists seldom dwell on this fact, but the Scientific Revolution itself began in one of the most dogmatic, intolerant and religious societies in history.
We often associate science with the values of secularism and tolerance. If so, early modern Europe is the last place you would have expected a scientific revolution. Europe in the days of Columbus, Copernicus and Newton had the highest concentration of religious fanatics in the world, and the lowest level of tolerance. The luminaries of the Scientific Revolution lived in a society that expelled Jews and Muslims, burned heretics wholesale, saw a witch in every cat-loving elderly lady and started a new religious war every full moon.
If you had travelled to Cairo or Istanbul around 1600, you would have found a multicultural and tolerant metropolis, where Sunnis, Shiites, Orthodox Christians, Catholics, Armenians, Copts, Jews and even the occasional Hindu lived side by side in relative harmony. Though they had their share of disagreements and riots, and though the Ottoman Empire routinely discriminated against people on religious grounds, it was a liberal paradise compared with Europe. If you had then sailed on to contemporary Paris or London, you would have found cities awash with religious extremism, in which only those belonging to the dominant sect could live. In London they killed Catholics, in Paris they killed Protestants, the Jews had long been driven out, and nobody in his right mind would dream of letting any Muslims in. And yet, the Scientific Revolution began in London and Paris rather than in Cairo and Istanbul.
It is customary to portray the history of modernity as a struggle between science and religion. In theory, both science and religion are interested above all in the truth, and because each upholds a different truth, they are doomed to clash. In fact, neither science nor religion cares that much about the truth, hence they can easily compromise, coexist and even cooperate.
Religion is interested above all in order. It aims to create and maintain the social structure. Science is interested above all in power. Through research, it aims to acquire the power to cure diseases, fight wars and produce food. As individuals, scientists and priests may give immense importance to the truth; but as collective institutions, science and religion prefer order and power over truth. They therefore make good bedfellows. The uncompromising quest for truth is a spiritual journey, which can seldom remain within the confines of either religious or scientific establishments.
It would accordingly be far more accurate to view modern history as the process of formulating a deal between science and one particular religion – namely, humanism. Modern society believes in humanist dogmas, and uses science not in order to question these dogmas, but rather in order to implement them. In the twenty-first century the humanist dogmas are unlikely to be replaced by pure scientific theories. However, the covenant linking science and humanism may well crumble and give way to a very different kind of deal, between science and some new post-humanist religion. We will dedicate the next two chapters to understanding the modern covenant between science and humanism. The third and final part of the book will then explain why this covenant is disintegrating, and what new deal might replace it.
Modernity is a deal. All of us sign up to this deal on the day we are born, and it regulates our lives until the day we die. Very few of us can ever rescind or transcend this deal. It shapes our food, our jobs and our dreams, and it decides where we dwell, whom we love and how we pass away.
At first sight modernity looks like an extremely complicated deal, hence few try to understand what they have signed up for. It is like when you download some software and are asked to sign an accompanying contract that consists of dozens of pages of legalese; you take one look at it, immediately scroll down to the last page, tick ‘I agree’ and forget about it. Yet in fact modernity is a surprisingly simple deal. The entire contract can be summarised in a single phrase: humans agree to give up meaning in exchange for power.
Until modern times most cultures believed that humans played a part in some great cosmic plan. The plan was devised by the omnipotent gods, or by the eternal laws of nature, and humankind could not change it. The cosmic plan gave meaning to human life, but also restricted human power. Humans were much like actors on a stage. The script gave meaning to their every word, tear and gesture – but placed strict limits on their performance. Hamlet cannot murder Claudius in Act I, or leave Denmark and go to an ashram in India. Shakespeare won’t allow it. Similarly, humans cannot live for ever, they cannot escape all diseases, and they cannot do as they please. It’s not in the script.
In exchange for giving up power, premodern humans believed that their lives gained meaning. It really mattered whether they fought bravely on the battlefield, whether they supported the lawful king, whether they ate forbidden foods for breakfast or whether they had an affair with the next-door neighbour. This of course created some inconveniences, but it gave humans psychological protection against disasters. If something terrible happened – such as war, plague or drought – people consoled themselves that ‘We all play a role in some great cosmic drama devised by the gods, or by the laws of nature. We are not privy to the script, but we can rest assured that everything happens for a purpose. Even this terrible war, plague and drought have their place in the greater scheme of things. Furthermore, we can count on the playwright that the story surely has a good and meaningful ending. So even the war, plague and drought will work out for the best – if not here and now, then in the afterlife.’
Modern culture rejects this belief in a great cosmic plan. We are not actors in any larger-than-life drama. Life has no script, no playwright, no director, no producer – and no meaning. To the best of our scientific understanding, the universe is a blind and purposeless process, full of sound and fury but signifying nothing. During our infinitesimally brief stay on our tiny speck of a planet, we fret and strut this way and that, and then are heard of no more.
Since there is no script, and since humans fulfil no role in any great drama, terrible things might befall us and no power will come to save us or give meaning to our suffering. There won’t be a happy ending, or a bad ending, or any ending at all. Things just happen, one after the other. The modern world does not believe in purpose, only in cause. If modernity has a motto, it is ‘shit happens’.
On the other hand, if shit just happens, without any binding script or purpose, then humans too are not confined to any predetermined role. We can do anything we want – provided we can find a way. We are constrained by nothing except our own ignorance. Plagues and droughts have no cosmic meaning – but we can eradicate them. Wars are not a necessary evil on the way to a better future – but we can make peace. No paradise awaits us after death – but we can create paradise here on earth and live in it for ever, if we just manage to overcome some technical difficulties.
If we invest money in research, then scientific breakthroughs will accelerate technological progress. New technologies will fuel economic growth, and a growing economy will dedicate even more money to research. With each passing decade we will enjoy more food, faster vehicles and better medicines. One day our knowledge will be so vast and our technology so advanced that we shall distil the elixir of eternal youth, the elixir of true happiness, and any other drug we might possibly desire – and no god will stop us.
The modern deal thus offers humans an enormous temptation, coupled with a colossal threat. Omnipotence is in front of us, almost within our reach, but below us yawns the abyss of complete nothingness. On the practical level modern life consists of a constant pursuit of power within a universe devoid of meaning. Modern culture is the most powerful in history, and it is ceaselessly researching, inventing, discovering and growing. At the same time, it is plagued by more existential angst than any previous culture.
This chapter discusses the modern pursuit of power. The next chapter will examine how humankind has used its growing power to somehow sneak meaning back into the infinite emptiness of the cosmos. Yes, we moderns have promised to renounce meaning in exchange for power; but there’s nobody out there to hold us to our promise. We think we are smart enough to enjoy the full benefits of the modern deal, without having to pay its price.
The modern pursuit of power is fuelled by the alliance between scientific progress and economic growth. For most of history science progressed at a snail’s pace, while the economy was in deep freeze. The gradual increase in human population did lead to a corresponding increase in production, and sporadic discoveries sometimes resulted even in per capita growth, but this was a very slow process.
If in AD 1000 a hundred villagers produced a hundred tons of wheat, and in AD 1100, 105 villagers produced 107 tons of wheat, this nominal growth changed neither the rhythms of life nor the socio-political order. Whereas today everyone is obsessed with growth, in the premodern era people were oblivious to it. Princes, priests and peasants assumed that human production was more or less stable, that one person could enrich himself only by pilfering from another and that their grandchildren were unlikely to enjoy a better standard of living.
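To see how negligible this kind of growth really is, here is a quick back-of-the-envelope calculation using the illustrative figures above (the villagers and tons of wheat are examples from the previous paragraph, not historical data):

$$
\frac{107 \text{ tons}}{105 \text{ villagers}} \approx 1.019 \text{ tons per villager in AD 1100, versus } \frac{100}{100} = 1.000 \text{ in AD 1000}
$$

$$
\text{annual per capita growth} \approx 1.019^{1/100} - 1 \approx 0.02\% \text{ per year}
$$

At roughly 0.02 per cent a year, output per head would take several millennia to double; a modern economy growing at 2 per cent a year doubles in about thirty-five years.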
This stagnation resulted to a large extent from the difficulties involved in financing new projects. Without adequate funding, it wasn’t easy to drain swamps, construct bridges and build ports – not to mention engineer new wheat strains, discover new energy sources or open new trade routes. Funds were scarce because there was little credit in those days; there was little credit because people had no belief in growth; and people didn’t believe in growth because the economy was stagnant. Stagnation thereby perpetuated itself.
Suppose you live in a medieval town that suffers from annual outbreaks of dysentery. You resolve to find a cure. You need funding to set up a workshop, buy medicinal herbs and exotic chemicals, pay assistants and travel to consult with famous doctors. You also need money to feed yourself and your family while you are busy with your research. But you don’t have much money. You can approach the local miller, baker and blacksmith and ask them to meet all your needs for a few years, promising that when you finally discover the cure and become wealthy, you will pay your debts.
Unfortunately, the miller, baker and blacksmith are unlikely to consent. They need to feed their families today, and they have no faith in miracle medicines. They weren’t born yesterday, and in all their years they have never heard of anyone finding a new medicine for some dreaded disease. If you want provisions – you have to pay cash. But how can you get the money when you haven’t discovered the medicine yet, and all your time is taken up with research? Reluctantly, you go back to tilling your field, dysentery keeps tormenting the townsfolk, nobody tries to develop new remedies, and not a single gold coin changes hands. That’s how the economy languished and science stood still.
The cycle was eventually broken in the modern age thanks to people’s growing trust in the future, and the resulting miracle of credit. Credit is the economic manifestation of trust. Nowadays, if I want to develop a new drug but I don’t have enough money, I can get a loan from a bank, or turn to private investors and venture capital funds. When Ebola erupted in West Africa in the summer of 2014, what do you think happened to the shares of pharmaceutical companies that were busy developing anti-Ebola drugs and vaccines? They skyrocketed. Tekmira shares rose by 50 per cent and BioCryst shares by 90 per cent. In the Middle Ages, the outbreak of a plague caused people to raise their eyes towards heaven, and pray to God to forgive them for their sins. Today when people hear of some deadly new epidemic, they reach for their mobile phones and call their brokers. For the stock exchange, even an epidemic is a business opportunity.
If enough new ventures succeed, people’s trust in the future increases, credit expands, interest rates fall, entrepreneurs can raise money more easily and the economy grows. People consequently have even greater trust in the future, the economy keeps growing and science progresses with it.
It sounds simple on paper. Why then did humankind have to wait until the modern era for economic growth to gather momentum? For thousands of years people had little faith in future growth not because they were stupid, but because it contradicts our gut feelings, our evolutionary heritage and the way the world works. Most natural systems exist in equilibrium, and most survival struggles are a zero-sum game in which one can prosper only at the expense of another.
For example, each year roughly the same amount of grass grows in a given valley. The grass supports a population of 10,000 rabbits or so, which contains enough slow, dim-witted or unlucky rabbits to provide prey for a hundred foxes. If one fox is particularly clever and diligent, and devours more rabbits than average, then other foxes will likely starve. If all foxes somehow manage to catch more rabbits simultaneously, the rabbit population will crash, and next year even more foxes will starve. Even though there are occasional fluctuations in the rabbit market, in the long run the foxes cannot expect to hunt, say, 3 per cent more rabbits per year than the preceding year.
Of course, some ecological realities are more complex, and not all survival struggles are zero-sum games. Many animals cooperate effectively, and a few even give loans. The most famous lenders in nature are vampire bats. These bats congregate in the thousands inside caves, and every night fly out to look for prey. When they find a sleeping bird or careless mammal, they make a small incision in its skin, and suck its blood. But not all vampire bats find a victim every night. In order to cope with the uncertainty of their lives, the vampires loan blood to each other. A vampire that fails to find prey will come home and ask a more fortunate friend to regurgitate some stolen blood. Vampires remember very well to whom they loaned blood, so if at a later date the friend returns home hungry, he will approach his debtor, who will reciprocate the favour.
However, unlike human bankers, vampires never charge interest. If vampire A loans vampire B ten centilitres of blood, B will repay the same amount. Nor do vampires use loans in order to finance new businesses or encourage growth in the blood-sucking market. Because the blood is produced by other animals, the vampires have no way of increasing production. Though the blood market has its ups and downs, vampires cannot presume that in 2017 there will be 3 per cent more blood than in 2016, and that in 2018 the blood market will again grow by 3 per cent. Consequently, vampires don’t believe in growth.1

For millions of years of evolution humans lived under conditions similar to those of vampires, foxes and rabbits. Hence humans too find it difficult to believe in growth.
Evolutionary pressures have accustomed humans to see the world as a static pie. If somebody gets a larger slice of the pie, somebody else inevitably gets a smaller slice. A particular family or city may prosper, but humankind as a whole is not going to produce more than it produces today. Accordingly, traditional religions such as Christianity and Islam sought ways to solve humanity’s problems with the help of current resources, either by redistributing the existing pie, or by promising a pie in the sky.
Modernity, in contrast, is based on the firm belief that economic growth is not only possible, but absolutely essential. Prayers, good deeds and meditation might be comforting and inspiring, but problems such as famine, plague and war can only be solved through growth. This fundamental dogma can be summarised in one simple idea: ‘If you have a problem, you probably need more stuff, and in order to have more stuff, you must produce more of it.’
Modern politicians and economists insist that growth is vital for three principal reasons. Firstly, when we produce more, we can consume more, raise our standard of living and allegedly enjoy a happier life. Secondly, as long as humankind multiplies, economic growth is needed merely to stay where we are. For example, in India the annual population growth rate is 1.2 per cent. That means that unless the Indian economy expands each year by at least 1.2 per cent, unemployment will rise, salaries will fall and the average standard of living will decline. Thirdly, even if Indians stop multiplying, and even if the Indian middle class can be satisfied with its current standard of living, what should India do about its hundreds of millions of poverty-stricken citizens? If the economy doesn’t grow, and the pie therefore remains the same size, you can give more to the poor only by taking something from the rich. That will force you to make some very hard choices, and will probably cause a lot of resentment and even violence. If you wish to avoid hard choices, resentment and violence, you need a bigger pie.
Modernity has turned ‘more stuff’ into a panacea applicable to almost all public and private problems, from religious fundamentalism through Third World authoritarianism down to a failed marriage. If only countries such as Pakistan and Egypt could maintain a healthy growth rate, their citizens would come to enjoy the benefits of private cars and bulging refrigerators, and would take the path of earthly prosperity instead of following the fundamentalist pied piper. Similarly, economic growth in countries such as Congo and Myanmar would produce a prosperous middle class which is the bedrock of liberal democracy. And in the case of the disgruntled couple, their marriage would allegedly be saved if only they would buy a bigger house (so they don’t have to share a cramped office), purchase a dishwasher (so they stop arguing whose turn it is to do the dishes) and attend expensive therapy sessions twice a week.
Economic growth has thus become the crucial juncture where almost all modern religions, ideologies and movements meet. The Soviet Union, with its megalomaniacal Five Year Plans, was as obsessed with growth as the most cut-throat American robber baron. Just as Christians and Muslims all believe in heaven, and disagree only about how to get there, so during the Cold War both capitalists and communists believed in creating heaven on earth through economic growth, and wrangled only about the exact method.
Today Hindu revivalists, pious Muslims, Japanese nationalists and Chinese communists may declare their adherence to very different values and goals, but they have all come to believe that economic growth is the key to realising their disparate goals. Thus in 2014 the devout Hindu Narendra Modi was elected prime minister of India thanks largely to his success in boosting economic growth in his home state of Gujarat, and to the widely held view that only he could reinvigorate the sluggish national economy. Analogous views have kept the Islamist Recep Tayyip Erdoğan in power in Turkey since 2003. The name of his party – the Justice and Development Party – highlights its commitment to economic development, and the Erdoğan government has indeed managed to maintain impressive growth rates for more than a decade.
Japan’s prime minister, the nationalist Shinzō Abe, came to office in 2012 pledging to jolt the Japanese economy out of two decades of stagnation. His aggressive and somewhat unusual measures to achieve this have been nicknamed Abenomics. Meanwhile in neighbouring China the Communist Party still pays lip service to traditional Marxist–Leninist ideals, but in practice is guided by Deng Xiaoping’s famous maxims that ‘development is the only hard truth’ and that ‘it doesn’t matter if a cat is black or white, so long as it catches mice’. Which means, in plain language: do whatever it takes to promote economic growth, even if Marx and Lenin wouldn’t have been happy with it.
In Singapore, as befits that no-nonsense city-state, they pursue this line of thinking even further, and peg ministerial salaries to the national GDP. When the Singaporean economy grows, government ministers get a raise, as if that is what their jobs are all about.2
This obsession with growth might appear self-evident, but only because we live in the modern world. It wasn’t like this in the past. Indian maharajas, Ottoman sultans, Kamakura shoguns and Han emperors rarely staked their political fortunes on ensuring economic growth. That Modi, Erdoğan, Abe and Chinese president Xi Jinping all bet their careers on economic growth testifies to the almost religious status growth has managed to acquire throughout the world. Indeed, it may not be wrong to call the belief in economic growth a religion, because it now purports to solve many, if not most, of our ethical dilemmas. Since economic growth is allegedly the source of all good things, it encourages people to bury their ethical disagreements and adopt whichever course of action maximises long-term growth. Thus Modi’s India is home to thousands of sects, parties, movements and gurus, yet though their ultimate aims may differ, they all have to pass through the same bottleneck of economic growth, so why not pull together in the meantime?
The credo of ‘more stuff’ accordingly urges individuals, firms and governments to disregard anything that might hamper economic growth, such as preserving social equality, ensuring ecological harmony or honouring one’s parents. In the Soviet Union, the leadership thought that state-controlled communism was the fastest way to grow, hence anything that stood in the way of collectivisation was bulldozed, including millions of kulaks, the freedom of expression and the Aral Sea. Nowadays it is generally accepted that some version of free-market capitalism is a much more efficient way of ensuring long-term growth, hence greedy tycoons, rich farmers and freedom of expression are protected, but ecological habitats, social structures and traditional values that stand in the way of free-market capitalism are dismantled and destroyed.
Take, for example, a software engineer earning $100 per hour working for some hi-tech start-up. One day her elderly father has a stroke. He now needs help with shopping, cooking and even bathing. She could move her father to her own house, leave for work later in the morning, come back earlier in the evening and take care of her father personally. Both her income and the start-up’s productivity would suffer, but her father would enjoy the care of a respectful and loving daughter. Alternatively, the engineer could hire a Mexican carer who, for $12 per hour, would live with the father and provide for all his needs. That would mean business as usual for the engineer and her start-up, and even the carer and the Mexican economy would benefit. What should the engineer do?
Free-market capitalism has a firm answer. If economic growth demands that we loosen family bonds, encourage people to live away from their parents, and import carers from the other side of the world – so be it. This answer, however, involves an ethical judgement rather than a factual statement. When some people specialise in software engineering while others devote their time to care of the elderly, we can no doubt produce more software and give old people more professional care. Yet is economic growth more important than family bonds? By presuming to make such ethical judgements, free-market capitalism has crossed the border from the land of science into that of religion.
Most capitalists would probably dislike the label of religion, but as religions go, capitalism can at least hold its head high. Unlike other religions that promise us pie in the sky, capitalism promises miracles here on earth – and sometimes even delivers. Much of the credit for overcoming famine and plague belongs to the ardent capitalist faith in growth. Capitalism even deserves some kudos for reducing human violence and increasing tolerance and cooperation. As the next chapter explains, there are additional factors at play here, but capitalism did make an important contribution to global harmony by encouraging people to stop viewing the economy as a zero-sum game, in which your profit is my loss, and instead see it as a win–win situation, in which your profit is also my profit. This mutual-benefit approach has probably helped global harmony far more than centuries of Christian preaching about loving your neighbour and turning the other cheek.
From its belief in the supreme value of growth, capitalism deduces its number one commandment: thou shalt invest thy profits in increasing growth. For most of history princes and priests wasted their profits on flamboyant carnivals, sumptuous palaces and unnecessary wars. Alternatively, they placed their gold coins in iron chests, sealed them and buried them in a dungeon. Today, devout capitalists use their profits to hire new employees, enlarge the factory or develop a new product.
If they don’t know how to do it themselves, they give their money to somebody who does, such as a banker or a venture capitalist, who then lends it to various entrepreneurs. Farmers take loans to plant new wheat fields, contractors build new houses, energy corporations explore new oil fields, and arms factories develop new weapons. The profits from all these activities enable the entrepreneurs to repay the loans with interest. We now have not only more wheat, houses, oil and weapons – but also more money, which the banks and funds can again lend. This wheel will never stop rotating, at least not according to capitalism. We will never reach a moment when capitalism says: ‘That’s it. Enough growth. We can now take it easy.’ If you want to know why the capitalist wheel is unlikely ever to stop, talk for an hour with a friend who has just accumulated $100,000 and is wondering what to do with it.
‘The banks offer such low interest rates,’ he would complain. ‘I don’t want to put my money in a savings account that pays barely 0.5 per cent a year. Maybe I can make 2 per cent in government bonds. My cousin Richie bought a flat in Seattle last year and has already made 20 per cent on his investment! Maybe I should go into real estate too; but everybody is saying there’s a new real-estate bubble. So, what do you think about the stock exchange? A friend told me the best deal these days is to buy an ETF that follows emerging economies, like Brazil or China.’ As he pauses for a moment to breathe, you ask, ‘Well, why not just be satisfied with your $100,000?’ He will explain to you better than I can why capitalism will never stop.
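To make concrete why the friend keeps chasing a better return, here is a minimal sketch of compound growth; the rates and the ten-year horizon are taken from the anecdote above and are purely illustrative, not investment figures:

```python
# Minimal sketch: how $100,000 compounds over ten years at the rates
# mentioned in the anecdote (0.5% savings, 2% bonds, 20% like cousin
# Richie's flat). Illustrative numbers only, not a forecast.

def compound(principal: float, annual_rate: float, years: int) -> float:
    """Return the value of `principal` after `years` of annual compounding."""
    return principal * (1 + annual_rate) ** years

principal = 100_000
for label, rate in [("savings account", 0.005),
                    ("government bonds", 0.02),
                    ("Richie-style real estate", 0.20)]:
    value = compound(principal, rate, years=10)
    print(f"{label:>25}: ${value:,.0f} after 10 years")

# Output (approximately):
#          savings account: $105,114 after 10 years
#         government bonds: $121,899 after 10 years
# Richie-style real estate: $619,174 after 10 years
```

At these rates the gap between being satisfied with the $100,000 and reinvesting it widens every single year, which is why the conversation never ends with the money simply sitting still.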
This lesson is hammered home even to children and teenagers through ubiquitous capitalist games. Premodern games such as chess assumed a stagnant economy. You begin a game of chess with sixteen pieces, and you never finish a game with more. In rare cases a pawn may be transformed into a queen, but you cannot produce new pawns nor upgrade your knights into tanks. So chess players never have to think about investment. In contrast, many modern board games and computer games focus on investment and growth.
Particularly telling are civilisation-style strategy games, such as Minecraft, The Settlers of Catan or Sid Meier’s Civilization. The game may be set in the Middle Ages, the Stone Age or some imaginary fairy land, but the principles always remain the same – and are always capitalist. Your aim is to establish a city, a kingdom or maybe an entire civilisation. You begin from a very modest base, perhaps just a village and its nearby fields. Your assets provide you with an initial income of wheat, wood, iron or gold. You then have to invest this income wisely. You have to choose between unproductive but still necessary tools such as soldiers, and productive assets such as more villages, fields and mines. The winning strategy is usually to invest the barest minimum in non-productive essentials, while maximising your productive assets. Establishing additional villages means that next turn you will have a larger income that will enable you not only to buy more soldiers (if necessary) but also to increase your investment in production. Soon you can upgrade your villages to towns, build universities, harbours and factories, explore the seas and oceans, establish your civilisation and win the game.
Yet can the economy actually keep growing for ever? Won’t it eventually run out of resources – and grind to a halt? In order to ensure perpetual growth, we must somehow discover an inexhaustible store of resources.
One solution is to explore and conquer new lands. For centuries, the growth of the European economy and the expansion of the capitalist system indeed relied heavily on overseas imperial conquests. However, there are only so many islands and continents on earth. Some entrepreneurs do hope eventually to explore and conquer new planets and even galaxies, but in the meantime, the modern economy has had to find a better method of expanding.
It is science that has provided modernity with the solution. The fox economy cannot grow, because foxes don’t know how to produce more rabbits. The rabbit economy stagnates, because rabbits cannot make the grass grow faster. But the human economy can grow because humans can discover new materials and sources of energy.
The traditional view of the world as a pie of a fixed size presupposes that there are only two kinds of resources in the world: raw materials and energy. But in truth there are three kinds of resources: raw materials, energy and knowledge. Raw materials and energy are exhaustible – the more you use, the less you have. Knowledge, in contrast, is a growing resource – the more you use, the more you have. Indeed, when you increase your stock of knowledge, it can give you more raw materials and energy as well. If I invest $100 million searching for oil in Alaska and I find it, then I now have more oil, but my grandchildren will have less of it. In contrast, if I invest $100 million researching solar energy, and I find a new and more efficient way of harnessing it, then both I and my grandchildren will have more energy.
For thousands of years the scientific road to growth was blocked because people believed that holy scriptures and ancient traditions already contained all the important knowledge the world had to offer. A corporation that believed all the oil fields in the world had already been discovered would not waste time and money searching for oil. Similarly, a human culture that believed it already knew everything worth knowing would not bother searching for new knowledge. This was the position of most premodern human civilisations. However, the Scientific Revolution freed humankind from this naïve conviction. The greatest scientific discovery was the discovery of ignorance. Once humans realised how little they knew about the world, they suddenly had a very good reason to seek new knowledge, which opened up the scientific road to progress.
With each passing generation science helped discover fresh sources of energy, new kinds of raw material, better machinery and novel production methods. Consequently, in 2016 humankind commands far more energy and raw materials than ever before, and production skyrockets. Inventions such as the steam engine, the internal combustion engine and the computer have created whole new industries from scratch. As we look twenty years into the future, we confidently expect to produce and consume far more in 2036 than we do today. We trust nanotechnology, genetic engineering and artificial intelligence to revolutionise production yet again, and to open whole new sections in our ever-expanding supermarkets.
We therefore have a good chance of overcoming the problem of resource scarcity. The real nemesis of the modern economy is ecological collapse. Both scientific progress and economic growth take place within a brittle biosphere, and as they gather steam, so the shock waves destabilise the ecology. In order to provide every person in the world with the same standard of living as affluent Americans, we would need a few more planets – but we have only this one. If progress and growth do end up destroying the ecosystem, the cost will be dear not merely to vampires, foxes and rabbits, but also to Sapiens. An ecological meltdown will cause economic ruin, political turmoil, a fall in human standards of living, and might threaten the very existence of human civilisation.
We could lessen the danger by slowing the pace of progress and growth. If this year investors expect to get a 6 per cent return on their portfolios, in ten years they could learn to be satisfied with a 3 per cent return, in twenty years with only 1 per cent, and in thirty years the economy will stop growing and we’ll be happy with what we already have. Yet the creed of growth firmly objects to such a heretical idea. Instead, it suggests we should run even faster. If our discoveries destabilise the ecosystem and threaten humanity, then we should discover something to protect ourselves. If the ozone layer dwindles and exposes us to skin cancer, we should invent better sunscreen and better cancer treatments, thereby also promoting the growth of new sunscreen factories and cancer centres. If all the new industries pollute the atmosphere and the oceans, causing global warming and mass extinctions, then we should build for ourselves virtual worlds and hi-tech sanctuaries that will provide us with all the good things in life even if the planet becomes as hot, dreary and polluted as hell.
Beijing has already become so polluted that people avoid the outdoors, and wealthy Chinese pay thousands of dollars for indoor air-purifying systems. The super-rich build protective contraptions even over their yards. In 2013 the International School of Beijing, which caters to the children of foreign diplomats and upper-class Chinese, went a step further, and constructed a gigantic $5 million dome over its six tennis courts and its playing fields. Other schools are following suit, and the Chinese air-purification market is booming. Of course most Beijing residents cannot afford such luxuries in their homes, nor can they afford to send their kids to the International School.3
Humankind finds itself locked into a double race. On the one hand, we feel compelled to speed up the pace of scientific progress and economic growth. A billion Chinese and a billion Indians want to live like middle-class Americans, and they see no reason why they should put their dreams on hold when the Americans are unwilling to give up their SUVs and shopping malls. On the other hand, we must stay at least one step ahead of ecological Armageddon. Managing this double race becomes more difficult by the year, because every stride that brings the Delhi slum-dwellers closer to the American Dream also brings the planet closer to the brink.
The good news is that for hundreds of years humankind has enjoyed a growing economy without falling prey to ecological meltdown. Many other species have perished in the process, and humans too have faced a number of economic crises and ecological disasters, but so far we have always managed to pull through. Yet future success is not guaranteed by any law of nature. Who knows if science will always be able to simultaneously save the economy from freezing and the ecology from boiling. And since the pace just keeps accelerating, the margins for error keep narrowing. If previously it was sufficient to invent something amazing once a century, today we need to come up with a miracle every two years.
We should also be concerned that an ecological apocalypse might have different consequences for different human castes. There is no justice in history. When disaster strikes, the poor almost always suffer far more than the rich, even if the rich caused the tragedy in the first place. Global warming is already affecting the lives of poor people in arid African countries more than the lives of affluent Westerners. Paradoxically, the very power of science may increase the danger, because it makes the rich complacent.
Consider greenhouse gas emissions. Most scholars and an increasing number of politicians recognise the reality of global warming and the magnitude of the danger. Yet this recognition has so far failed to change our actual behaviour in any significant way. We talk a lot about global warming, but in practice humankind is unwilling to make the serious economic, social or political sacrifices necessary to stop this catastrophe. Between 2000 and 2010 emissions didn’t decrease at all. On the contrary, they increased at an annual rate of 2.2 per cent, compared with an annual increase rate of 1.3 per cent between 1970 and 2000.4 The 1997 Kyoto protocol on the reduction of greenhouse gas emissions aimed merely to retard global warming rather than stop it, yet the world’s number one polluter – the United States – refused to ratify it, and has made no attempt to significantly reduce its emissions, for fear of impeding its economic growth.5
26. All the talk about global warming, and all the conferences, summits and protocols, have so far failed to curb global greenhouse gas emissions. If you look closely at the graph you see that emissions go down only during periods of economic crises and stagnation. Thus the small downturn in greenhouse emissions in 2008–9 was due not to the signing of the Copenhagen Accord, but to the global financial crisis. The only sure way to stop global warming is to stop economic growth, which no government is willing to do.
26. Source: Emission Database for Global Atmospheric Research (EDGAR), European Commission.
In December 2015 more ambitious targets were set in the Paris Agreement, which calls for limiting average temperature increase to 1.5 degrees above pre-industrial levels. But many of the painful steps necessary to reach this goal have conveniently been postponed to after 2030, or even to the second half of the twenty-first century, effectively passing the hot potato to the next generation. Current administrations are able to reap immediate political benefits from looking green, while the heavy political price of reducing emissions (and slowing growth) is bequeathed to future administrations. Even so, at the time of writing (January 2016) it is far from certain that the USA and other leading polluters will ratify the Paris Agreement. Too many politicians and voters believe that as long as the economy grows, scientists and engineers will always save us from doomsday. When it comes to climate change, many growth true-believers don’t just hope for miracles – they take it for granted that the miracles will happen.
How rational is it to risk the future of humankind on the assumption that future scientists will make some unknown planet-saving discoveries? Most of the presidents, ministers and CEOs who run the world are very rational people. Why are they willing to make such a gamble? Maybe because they don’t think they are gambling on their own personal future. Even if worst comes to worst and science cannot hold off the deluge, engineers could still build a hi-tech Noah’s Ark for the upper caste, while leaving billions of others to drown. The belief in this hi-tech Ark is currently one of the biggest threats to the future of humankind and of the entire ecosystem. People who believe in the hi-tech Ark should not be put in charge of the global ecology, for the same reason that people who believe in a heavenly afterlife should not be given nuclear weapons.
And what about the poor? Why aren’t they protesting? If and when the deluge comes, they will bear the full cost of it. However, they will also be the first to bear the cost of economic stagnation. In a capitalist world the lives of the poor improve only when the economy grows. Hence they are unlikely to support any steps to reduce future ecological threats that are based on slowing down present-day economic growth. Protecting the environment is a very nice idea, but those who cannot pay their rent are worried about their overdraft far more than about melting ice caps.
Even if we continue running fast enough and manage to fend off both economic collapse and ecological meltdown, the race itself creates huge problems. For the individual it results in high levels of stress and tension. After centuries of economic growth and scientific progress, life should have become calm and peaceful, at least in the most advanced countries. If our ancestors knew what tools and resources stand ready at our command, they would have surmised that we must be enjoying celestial tranquillity, free of all cares and worries. The truth is very different. Despite all our achievements, we feel a constant pressure to do and produce even more.
We blame ourselves, our boss, the mortgage, the government, the school system. But it’s not really their fault. It’s the modern deal that we all signed up for on the day we were born. In the premodern world, people were akin to lowly clerks in a socialist bureaucracy. They punched their cards, and then waited for somebody else to do something. In the modern world, we humans run the business, so we are under constant pressure day and night.
On the collective level, the race manifests itself in ceaseless upheavals. Whereas social and political systems previously endured for centuries, today every generation destroys the old world and builds a new one in its place. As the Communist Manifesto brilliantly put it, the modern world positively requires uncertainty and disturbance. All fixed relations and ancient prejudices are swept away, and new structures become antiquated before they can ossify. All that is solid melts into air. It isn’t easy to live in such a chaotic world, and it is even harder to govern it.
Hence modernity needs to work hard to ensure that neither human individuals nor the human collective will try to retire from the race, despite all the tension and chaos it creates. For that purpose modernity upholds growth as a supreme value for whose sake we should make every sacrifice and risk every danger. On the collective level, governments, firms and organisations are encouraged to measure their success in terms of growth, and to fear equilibrium as if it were the Devil. On the individual level, we are inspired to constantly increase our incomes and our standards of living. Even if you are quite satisfied with your current conditions, you should strive for more. Yesterday’s luxuries become today’s necessities. If once you could live well in a three-bedroom apartment with one car and a single desktop computer, today you need a five-bedroom house with two cars and a host of iPods, tablets and smartphones.
It wasn’t very hard to convince individuals to want more. Greed comes easily to humans. The big problem was to convince collective institutions such as states and churches to go along with the new ideal. For millennia, societies strove to curb individual desires and bring them into some kind of balance. It was well known that people wanted more and more for themselves, but when the pie was of a fixed size, social harmony depended on restraint. Avarice was bad. Modernity turned the world upside down. It convinced human collectives that equilibrium is far more frightening than chaos, and because avarice fuels growth, it is a force for good. Modernity accordingly inspired people to want more, and dismantled the age-old disciplines that curbed greed.
The resulting anxieties were assuaged to a large extent by free-market capitalism, which is one reason why this particular ideology has become so popular. Capitalist thinkers repeatedly calm us: ‘Don’t worry, it will be okay. Provided the economy grows, the invisible hand of the market will take care of everything else.’ Capitalism has thus sanctified a voracious and chaotic system that grows by leaps and bounds, without anyone understanding what is happening and whither we are rushing. (Communism, which also believed in growth, thought it could prevent chaos and orchestrate growth through state planning. But after initial successes, it eventually fell far behind the messy free-market cavalcade.)
Bashing free-market capitalism is high on the intellectual agenda nowadays. Since capitalism dominates our world, we should indeed make every effort to understand its shortcomings before they cause apocalyptic catastrophes. Yet criticising capitalism should not blind us to its advantages and attainments. So far, it’s been an amazing success – at least if you ignore the potential for future ecological meltdown, and if you measure success by the yardstick of production and growth. In 2016 we may be living in a stressful and chaotic world, but the doomsday prophecies of collapse and violence have not materialised, whereas the scandalous promises of perpetual growth and global cooperation have been fulfilled. Although we experience occasional economic crises and international wars, in the long run capitalism has not only managed to prevail, but also to overcome famine, plague and war. For thousands of years priests, rabbis and muftis explained that humans cannot overcome famine, plague and war by their own efforts. Then along came the bankers, investors and industrialists, and within 200 years managed to do exactly that.
So the modern deal promised us unprecedented power – and the promise has been kept. Now what about the price? In exchange for power, the modern deal expects us to give up meaning. How did humans handle this chilling demand? Complying with it could easily have resulted in a dark world, devoid of ethics, aesthetics and compassion. Yet the fact remains that humankind is today not only far more powerful than ever, it is also far more peaceful and cooperative. How did humans manage that? How did morality, beauty and even compassion survive and flourish in a world devoid of gods, of heaven and of hell?
Capitalists are, again, quick to give all the credit to the invisible hand of the market. Yet the market’s hand is not only invisible, it is also blind, and by itself could never have saved human society. Indeed, not even a country fair can maintain itself without the helping hand of some god, king or church. If everything is for sale, including the courts and the police, trust evaporates, credit vanishes and business withers.6 What, then, rescued modern society from collapse? Humankind was salvaged not by the law of supply and demand, but rather by the rise of a revolutionary new religion – humanism.
The modern deal offers us power, on condition that we renounce our belief in a great cosmic plan that gives meaning to life. Yet when you examine the deal closely, you find a cunning escape clause. If humans somehow manage to find meaning without predicating it upon some great cosmic plan, this is not considered a breach of contract.
This escape clause has been the salvation of modern society, for it is impossible to sustain order without meaning. The great political, artistic and religious project of modernity has been to find a meaning to life that is not rooted in some great cosmic plan. We are not actors in a divine drama, and nobody cares about us and our deeds, so nobody sets limits to our power – but we are still convinced our lives have meaning.
As of 2016, humankind has indeed managed to have it both ways. Not only do we possess far more power than ever before, but, against all expectations, God’s death did not lead to social collapse. Throughout history prophets and philosophers have argued that if humans stopped believing in a great cosmic plan, all law and order would vanish. Yet today, those who pose the greatest threat to global law and order are precisely those people who continue to believe in God and His all-encompassing plans. God-fearing Syria is a far more violent place than the secular Netherlands.
If there is no cosmic plan, and we are not committed to any divine or natural laws, what prevents social collapse? How come you can travel for thousands of miles, from Amsterdam to Bucharest or from New Orleans to Montreal, without being kidnapped by slave-traders, ambushed by outlaws or killed by feuding tribes?
The antidote to a meaningless and lawless existence was provided by humanism, a revolutionary new creed that conquered the world during the last few centuries. The humanist religion worships humanity, and expects humanity to play the part that God played in Christianity and Islam, and that the laws of nature played in Buddhism and Daoism. Whereas traditionally the great cosmic plan gave meaning to the life of humans, humanism reverses the roles and expects the experiences of humans to give meaning to the cosmos. According to humanism, humans must draw from within their inner experiences not only the meaning of their own lives, but also the meaning of the entire universe. This is the primary commandment humanism has given us: create meaning for a meaningless world.
Accordingly, the central religious revolution of modernity was not losing faith in God; rather, it was gaining faith in humanity. It took centuries of hard work. Thinkers wrote pamphlets, artists composed poems and symphonies, politicians struck deals – and together they convinced humanity that it can imbue the universe with meaning. To grasp the depth and implications of the humanist revolution, consider how modern European culture differs from medieval European culture. In 1300 people in London, Paris and Toledo did not believe that humans could determine by themselves what is good and what is evil, what is right and what is wrong, what is beautiful and what is ugly. Only God could create and define goodness, righteousness and beauty.
Although it was widely accepted that humans enjoy unique abilities and opportunities, they were also seen as ignorant and corruptible beings. Without external supervision and guidance, humans could never understand the eternal truth, but would instead be drawn to fleeting sensual pleasures and worldly delusions. In addition, medieval thinkers pointed out that humans are mortal, and their opinions and feelings are as fickle as the wind. Today I love something with all my heart, tomorrow I am disgusted by it, and next week I am dead and buried. Hence any meaning that depends on human opinion is necessarily fragile and ephemeral. Absolute truths, and the meaning of life and of the universe, must therefore be based on some eternal law emanating from a superhuman source.
This view made God the supreme source not only of meaning but also of authority. Meaning and authority always go hand in hand. Whoever determines the meaning of our actions – whether they are good or evil, right or wrong, beautiful or ugly – also gains the authority to tell us what to think and how to behave.
God’s role as the source of meaning and authority was not just a philosophical theory. It affected every facet of daily life. Suppose that in 1300, in some small English town, a married woman took a fancy to the next-door neighbour and had sex with him. As she sneaked back home, hiding a smile and straightening her dress, her mind began to race: ‘What was that all about? Why did I do it? Was it good or bad? What does it imply about me? Should I do it again?’ In order to answer such questions, the woman was supposed to go to the local priest, confess and ask the holy father for guidance. The priest was well versed in scriptures, and these sacred texts revealed to him exactly what God thought about adultery. Based on the eternal word of God, the priest could determine beyond all doubt that the woman had committed a mortal sin, and that if she didn’t make amends she’d end up in hell. She ought therefore to repent immediately, donate ten gold coins to the coming crusade, avoid eating meat for the next six months and make a pilgrimage to the tomb of St Thomas à Becket at Canterbury. And it goes without saying that she must never repeat her dreadful sin.
Today things are very different. For centuries humanism has been convincing us that we are the ultimate source of meaning, and that our free will is therefore the highest authority of all. Instead of waiting for some external entity to tell us what’s what, we can rely on our own feelings and desires. From infancy we are bombarded with a barrage of humanist slogans counselling us: ‘Listen to yourself, be true to yourself, trust yourself, follow your heart, do what feels good.’ Jean-Jacques Rousseau summed it all up in his novel Émile, the eighteenth-century bible of feeling. Rousseau held that, when looking for life’s rules of conduct, he found them ‘in the depths of my heart, traced by nature in characters which nothing can efface. I need only consult myself with regard to what I wish to do; what I feel to be good is good, what I feel to be bad is bad.’1
Accordingly, when a modern woman wants to understand the meaning of an affair she is having, she is far less prone to blindly accept the judgements of a priest or an ancient book. Instead, she will carefully examine her feelings. If her feelings aren’t very clear, she will call a good friend, meet for coffee and pour her heart out. If things are still vague, she will go to her therapist, and tell him all about it. Theoretically, the modern therapist occupies the same place as the medieval priest, and it is an overworked cliché to compare the two professions. Yet in practice, a huge chasm separates them. The therapist does not possess a holy book that defines good and evil. When the woman finishes her story, it is highly unlikely that the therapist will burst out: ‘You wicked woman! You have committed a terrible sin!’ It is equally unlikely that he will say, ‘Wonderful! Good for you!’ Instead, no matter what the woman may have done and said, the therapist is most likely to ask in a caring voice, ‘Well, how do you feel about what happened?’
True, the therapist’s bookshelf sags under the weight of the works of Freud and Jung and the 1,000-page Diagnostic and Statistical Manual of Mental Disorders (DSM). Yet these are not holy scriptures. The DSM diagnoses the ailments of life, not the meaning of life. Most psychologists believe that only human feelings are authorised to determine the true meaning of human actions. Hence no matter what the therapist thinks about his patient’s affair, and no matter what Freud, Jung and the DSM think about affairs in general, the therapist should not force his views on the patient. Instead, he should help her examine the most secret chambers of her heart. There and only there will she find the answers. Whereas medieval priests had a hotline to God and could distinguish for us between good and evil, modern therapists merely help us get in touch with our own inner feelings.
This partly explains the changing fortunes of the institution of marriage. In the Middle Ages, marriage was considered a sacrament ordained by God, and God also authorised a father to marry off his children according to his wishes and interests. An extramarital affair was consequently a brazen rebellion against both divine and parental authority. It was a mortal sin, no matter what the lovers felt and thought about it. Today people marry for love, and it is their personal feelings that give value to this bond. Hence, if the very same feelings that once drove you into the arms of one man now drive you into the arms of another, what’s wrong with that? If an extramarital affair provides an outlet for emotional and sexual desires that are not satisfied by your spouse of twenty years, and if your new lover is kind, passionate and sensitive to your needs – why not enjoy it?
But wait a minute, you might say. We cannot ignore the feelings of the other concerned parties. The woman and her lover might feel wonderful in each other’s arms, but if their respective spouses find out, everybody will probably feel awful for quite some time. And if it leads to divorce, their children might carry the emotional scars for decades. Even if the affair is never discovered, concealing it involves a lot of tension and may lead to growing feelings of alienation and resentment.
The most interesting discussions in humanist ethics concern situations like extramarital affairs, when human feelings collide. What happens when the same action causes one person to feel good, and another to feel bad? How do we weigh these feelings against each other? Do the good feelings of the two lovers outweigh the bad feelings of their spouses and children?
It doesn’t matter what you think about this particular question. It is far more important to understand the kind of arguments both sides employ. Modern people have differing ideas about extramarital affairs, but no matter what their stance is, they tend to justify it in the name of human feelings rather than in the name of holy scriptures and divine commandments. Humanism has taught us that something can be bad only if it causes somebody to feel bad. Murder is wrong not because some god once said, ‘Thou shalt not kill.’ Rather, murder is wrong because it causes terrible suffering to the victim, to his family members, and to his friends and acquaintances. Theft is wrong not because some ancient text says, ‘Thou shalt not steal.’ Rather, theft is wrong because when you lose your property, you feel bad about it. And if an action does not cause anyone to feel bad, there can be nothing wrong with it. If the same ancient text says that God commanded us not to make any images of either humans or animals (Exodus 20:4), but I enjoy sculpting such figures, and I don’t harm anyone in the process – then what could possibly be wrong with it?
The same logic dominates current debates on homosexuality. If two adult men enjoy having sex with one another, and they don’t harm anyone while doing so, why should it be wrong, and why should we outlaw it? It is a private matter between these two men, and they are free to decide about it according to their own personal feelings. If in the Middle Ages two men confessed to a priest that they were in love with one another, and that they had never felt so happy, their good feelings would not have changed the priest’s damning judgement – indeed, their lack of guilt would only have worsened the situation. Today, in contrast, if two men are in love, they are told: ‘If it feels good – do it! Don’t let any priest mess with your mind. Just follow your heart. You know best what’s good for you.’
Interestingly enough, today even religious zealots adopt this humanistic discourse when they want to influence public opinion. For example, every year for the past decade the Israeli LGBT community has held a gay pride parade in the streets of Jerusalem. It’s a unique day of harmony in this conflict-riven city, because it is the one occasion when religious Jews, Muslims and Christians suddenly find a common cause – they all fume in accord against the gay parade. What’s really interesting, though, is the argument they use. They don’t say, ‘These sinners shouldn’t hold a gay parade because God forbids homosexuality.’ Rather, they explain to every available microphone and TV camera that ‘seeing a gay parade passing through the holy city of Jerusalem hurts our feelings. Just as gay people want us to respect their feelings, they should respect ours.’
On 7 January 2015 Muslim fanatics massacred several staff members of the French magazine Charlie Hebdo, because the magazine published caricatures of the prophet Muhammad. In the following days, many Muslim organisations condemned the attack, yet some could not resist adding a ‘but’ clause. For example, the Egyptian Journalists Syndicate denounced the terrorists for their use of violence, but in the same breath denounced the magazine for ‘hurting the feelings of millions of Muslims across the world’.2 Note that the Syndicate did not blame the magazine for disobeying God’s will. That’s what we call progress.
Our feelings provide meaning not only for our private lives, but also for social and political processes. When we want to know who should rule the country, what foreign policy to adopt and what economic steps to take, we don’t look for the answers in scriptures. Nor do we obey the commands of the Pope or the Council of Nobel Laureates. Rather, in most countries, we hold democratic elections and ask people what they think about the matter at hand. We believe that the voter knows best, and that the free choices of individual humans are the ultimate political authority.
Yet how does the voter know what to choose? Theoretically at least, the voter is supposed to consult his or her innermost feelings, and follow their lead. It’s not always easy. In order to get in touch with my feelings, I need to filter out the empty propaganda slogans, the endless lies of ruthless politicians, the distracting noise created by cunning spin doctors, and the learned opinions of hired pundits. I need to ignore all this racket and attend only to my authentic inner voice. And then my authentic inner voice whispers in my ear ‘Vote Cameron’ or ‘Vote Modi’ or ‘Vote Clinton’ or whomever, and I put a cross against that name on the ballot paper – and that’s how we know who should rule the country.
In the Middle Ages this would have been considered the height of foolishness. The fleeting feelings of ignorant commoners were hardly a sound basis for important political decisions. When England was torn apart by the Wars of the Roses, nobody thought to end the conflict by having a national referendum, in which every bumpkin and wench would cast a vote for either Lancaster or York. Similarly, when Pope Urban II launched the First Crusade, he didn’t claim it was the people’s will. It was God’s will. Political authority came down from heaven – it didn’t rise up from the hearts and minds of mortal humans.
27. The Holy Spirit, in the guise of a dove, delivers an ampulla full of sacred oil for the baptism of King Clovis, founder of the Frankish kingdom (illustration from the Grandes Chroniques de France, c.1380). According to the founding myth of France, this ampulla was henceforth kept in Rheims Cathedral, and all subsequent French kings were anointed with the divine oil at their coronation. Each coronation thus involved a miracle, as the empty ampulla spontaneously refilled with oil. This indicated that God himself chose the king and gave him His blessing. If God had not wished Louis IX or Louis XIV or Louis XVI to be king, the ampulla would not have been refilled.
27. © Bibliothèque nationale de France, RC-A-02764, Grandes Chroniques de France de Charles V, folio 12v.
What’s true of ethics and politics is also true of aesthetics. In the Middle Ages art was governed by objective yardsticks. The standards of beauty did not reflect human fads. Rather, human tastes were supposed to conform to superhuman dictates. This made perfect sense in a period when people believed that art was inspired by superhuman forces rather than by human feelings. The hands of painters, poets, composers and architects were supposedly moved by muses, angels and the Holy Spirit. Many a time when a composer penned a beautiful hymn, no credit was given to the composer, for the same reason it was not given to the pen. The pen was held and directed by human fingers, which in turn were held and directed by the hand of God.
Medieval scholars clung to a classical Greek theory, according to which the movements of the stars across the sky create heavenly music that permeates the entire universe. Humans enjoy physical and mental health when the inner movements of their body and soul are in harmony with the heavenly music created by the stars. Human music should therefore echo the divine melody of the cosmos, rather than reflect the ideas and caprices of flesh-and-blood composers. The most beautiful hymns, songs and tunes were usually attributed not to the genius of some human artist but to divine inspiration.
28. Pope Gregory the Great composes the eponymous Gregorian chants. The Holy Spirit, in its favourite dove outfit, sits on his right shoulder, whispering the chants in his ear. The Holy Spirit is the chants’ true author, whereas Gregory is just a conduit. God is the ultimate source of art and beauty.
28. Manuscript: Registrum Gregorii, c.983 © Archiv Gerstenberg/ullstein bild via Getty Images.
Such views are no longer in vogue. Today humanists believe that the only source for artistic creation and aesthetic value is human feelings. Music is created and judged by our inner voice, which need follow neither the rhythms of the stars nor the commands of muses and angels. For the stars are mute, while muses and angels exist only in our own imagination. Modern artists seek to get in touch with themselves and their feelings, rather than with God. No wonder then that when we come to evaluate art, we no longer believe in any objective yardsticks. Instead, we again turn to our subjective feelings. In ethics, the humanist motto is ‘if it feels good – do it’. In politics, humanism instructs us that ‘the voter knows best’. In aesthetics, humanism says that ‘beauty is in the eye of the beholder’.
The very definition of art is consequently up for grabs. In 1917 Marcel Duchamp purchased an ordinary mass-produced urinal, declared it a work of art, named it Fountain, signed it and submitted it to an art exhibition in New York. Medieval people would not have bothered even to argue about it. Why waste oxygen on such utter nonsense? Yet in the modern humanist world, Duchamp’s work is considered an important artistic milestone. In countless classrooms across the world, first-year art students are shown an image of Duchamp’s Fountain, and at a sign from the teacher, all hell breaks loose. It is art! No it isn’t! Yes it is! No way! After letting the students release some steam, the teacher focuses the discussion by asking ‘What exactly is art? And how do we determine whether something is a work of art or not?’ After a few more minutes of back and forth the teacher steers the class in the right direction: ‘Art is anything people think is art, and beauty is in the eye of the beholder.’ If people think that a urinal is a beautiful work of art – then it is. What higher authority is there to tell people they are wrong? Today, copies of Duchamp’s masterpiece are presented in some of the most important museums in the world, including the San Francisco Museum of Modern Art, the National Gallery of Canada, the Tate Gallery in London and the Pompidou Centre in Paris. (The copies are displayed in the museums’ galleries, not in the lavatories.)
Such humanist approaches have had a deep impact on the economic field as well. In the Middle Ages guilds controlled the production process, leaving little room for the initiative or taste of individual artisans and customers. The carpenters’ guild determined what was a proper chair, the bakers’ guild defined good bread, and the Meistersinger guild decided which songs were first class and which were rubbish. Meanwhile princes and city councils regulated salaries and prices, occasionally forcing people to buy fixed amounts of goods at a non-negotiable price. In the modern free market, all these guilds, councils and princes have been superseded by a new supreme authority – the free will of the customer.
Suppose Toyota decides to produce the perfect car. It sets up a committee of experts from various fields: it hires the best engineers and designers, brings together the finest physicists and economists, and even consults with several sociologists and psychologists. To be on the safe side, they throw in a Nobel laureate or two, an Oscar-winning actress and some world-famous artists. After five years of research and development, they unveil the perfect car. Millions of vehicles are produced, and shipped to car dealerships across the world. Yet nobody buys the car. Does it mean that the customers are making a mistake, and that they don’t know what’s good for them? No. In a free market the customer is always right. If customers don’t want it, it means that the car is no good. It doesn’t matter if all the university professors and all the priests and mullahs cry out from every lectern and pulpit that this is a wonderful car – if the customers reject it, it’s a bad car. Nobody has the authority to tell customers that they are wrong, and heaven forbid that a government would try to force its citizens to buy a particular car against their will.
What’s true of cars is true of all other products. Listen, for example, to Professor Leif Andersson from the University of Uppsala. He specialises in the genetic enhancement of farm animals, in order to create faster-growing pigs, cows that produce more milk, and chickens with extra meat on their bones. In an interview with the newspaper Haaretz, reporter Naomi Darom confronted Andersson with the fact that such genetic manipulations might cause much suffering to the animals. Already today ‘enhanced’ dairy cows have such heavy udders that they can barely walk, while ‘upgraded’ chickens cannot even stand up. Professor Andersson had a firm answer: ‘Everything comes back to the individual customer and to the question of how much the customer is willing to pay for meat . . . we must remember that it would be impossible to maintain current levels of global meat consumption without the [enhanced] modern chicken . . . if customers ask us only for the cheapest meat possible – that’s what the customers will get . . . Customers need to decide what is most important to them – price, or something else.’3
Professor Andersson can go to sleep at night with a clean conscience. The fact that customers are buying his enhanced animal products implies that he is meeting their needs and desires and is therefore doing good. By the same logic, if some multinational corporation wants to know whether it lives up to its ‘Don’t be evil’ motto, it need only take a look at its bottom line. If it makes loads of money, it means that millions of people like its products, which implies that it is a force for good. If someone objects and says that people might make the wrong choice, he will be quickly reminded that the customer is always right, and that human feelings are the source of all meaning and authority. If millions of people freely choose to buy the company’s products, who are you to tell them that they are wrong?
Finally, the rise of humanist ideas has revolutionised education systems too. In the Middle Ages the source of all meaning and authority was external, hence education focused on instilling obedience, memorising scriptures and studying ancient traditions. Teachers presented pupils with a question, and the pupils had to remember how Aristotle, King Solomon or St Thomas Aquinas answered it.
29. Humanist Politics: the voter knows best.
29. © Sadik Gulec/Shutterstock.com.
30. Humanist Economics: the customer is always right.
30. © CAMERIQUE/ClassicStock/Corbis.
31. Humanist Aesthetics: Beauty is in the eye of the beholder. (Marcel Duchamp’s Fountain in a special exhibition of modern art at the National Gallery of Scotland.)
31. © Jeff J Mitchell/Getty Images.
32. Humanist Ethics: if it feels good – do it!
32. © Molly Landreth/Getty Images.
33. Humanist Education: think for yourself!
33. The Thinker, 1880–81 (bronze), Rodin, Auguste, Burrell Collection, Glasgow © Culture and Sport Glasgow (Museums)/Bridgeman Images.
In contrast, modern humanist education believes in teaching students to think for themselves. It is good to know what Aristotle, Solomon and Aquinas thought about politics, art and economics; yet since the supreme source of meaning and authority lies within ourselves, it is far more important to know what you think about these matters. Ask a teacher – whether in kindergarten, school or college – what she is trying to teach. ‘Well,’ she will answer, ‘I teach the kids history, or quantum physics, or art – but above all I try to teach them to think for themselves.’ It may not always succeed, but that is what humanist education seeks to do.
As the source of meaning and authority relocated from the sky to human feelings, the nature of the entire cosmos changed. The exterior universe – hitherto teeming with gods, muses, fairies and ghouls – became empty space. The interior world – hitherto an insignificant enclave of crude passions – became deep and rich beyond measure. Angels and demons were transformed from real entities roaming the forests and deserts of the world into inner forces within our own psyche. Heaven and hell too ceased to be real places somewhere above the clouds and below the volcanoes, and were instead interpreted as internal mental states. You experience hell every time you ignite the fires of anger and hatred within your heart; and you enjoy heavenly bliss every time you forgive your enemies, repent your own misdeeds and share your wealth with the poor.
When Nietzsche declared that God is dead, this is what he meant. At least in the West, God has become an abstract idea that some accept and others reject, but it makes little difference either way. In the Middle Ages, without a god I had no source of political, moral and aesthetic authority. I could not tell what was right, good or beautiful. Who could live like that? Today, in contrast, it is very easy not to believe in God, because I pay no price for my unbelief. I can be a complete atheist and still derive a very rich mixture of political, moral and aesthetic values from my inner experience.
If I believe in God at all, it is my choice to believe. If my inner self tells me to believe in God – then I believe. I believe because I feel God’s presence, and my heart tells me He is there. But if I no longer feel God’s presence, and if my heart suddenly tells me that there is no God – I will cease believing. Either way, the real source of authority is my own feelings. So even while saying that I believe in God, the truth is that I have a much stronger belief in my own inner voice.
Like every other source of authority, feelings have their shortcomings. Humanism assumes that each human has a single authentic inner self, but when I try to attend to it I often encounter either silence or a cacophony of contending voices. To overcome this problem, humanism has proclaimed not only a new source of authority, but also a new method for accessing that authority and gaining true knowledge.
In medieval Europe, the chief formula for knowledge was: Knowledge = Scriptures × Logic.* If people wanted to know the answer to an important question, they would read scriptures and use their logic to understand the exact meaning of the text. For example, scholars who wished to determine the shape of the earth scanned the Bible looking for relevant references. One pointed out that in Job 38:13, it says that God can ‘take hold of the edges of the earth, and the wicked be shaken out of it’. This implies – reasoned the pundit – that because the earth has ‘edges’ that God can ‘take hold of’, it must be a flat square. Another sage rejected this interpretation, calling attention to Isaiah 40:22, where it says that God ‘sits enthroned above the circle of the earth’. Isn’t that proof that the earth is round? In practice, this meant that scholars sought knowledge by spending years in schools and libraries, reading more and more texts, and sharpening their logic so they could understand the texts correctly.
The Scientific Revolution proposed a very different formula for knowledge: Knowledge = Empirical Data × Mathematics. If we want to know the answer to some question, we need to gather relevant empirical data, and then use mathematical tools to analyse them. For example, in order to gauge the true shape of the earth, we can begin by observing the sun, moon and planets from various locations across the world. Once we have amassed enough observations, we can use trigonometry to deduce not only the shape of the earth, but also the structure of the entire solar system. In practice, this means that scientists seek knowledge by spending years in observatories, laboratories and on research expeditions, gathering more and more empirical data, and sharpening their mathematical tools so they can interpret the data correctly.
The scientific formula for knowledge led to astounding breakthroughs in astronomy, physics, medicine and multiple other disciplines. But it has had one huge drawback: it could not deal with questions of value and meaning. Medieval pundits could determine with absolute certainty that it is wrong to murder and steal, and that the purpose of human life is to do God’s bidding, because scriptures said so. Scientists cannot deliver such ethical judgements. No amount of data and no mathematical wizardry can prove that it is wrong to murder. Yet human societies cannot survive without such value judgements.
One way to overcome this difficulty was to continue using the old medieval formula alongside the new scientific method. When faced with a practical problem – such as determining the shape of the earth, building a bridge or curing a disease – we collect empirical data and analyse them mathematically. When faced with an ethical problem – such as determining whether to allow divorce, abortion and homosexuality – we read scriptures. This solution was adopted to some extent by numerous modern societies, from Victorian Britain to twenty-first-century Iran.
However, humanism offered an alternative. As humans gained confidence in themselves, a new formula for acquiring ethical knowledge appeared: Knowledge = Experiences × Sensitivity. If we wish to know the answer to any ethical question, we need to connect to our inner experiences, and observe them with the utmost sensitivity. In practice, this means that we seek knowledge by spending years collecting experiences, and sharpening our sensitivity so we can understand these experiences correctly.
What exactly are ‘experiences’? They are not empirical data. An experience is not made of atoms, electromagnetic waves, proteins or numbers. Rather, an experience is a subjective phenomenon made up of three main ingredients: sensations, emotions and thoughts. At any particular moment my experience comprises everything I sense (heat, pleasure, tension, etc.), every emotion I feel (love, fear, anger, etc.) and whatever thoughts arise in my mind.
And what is ‘sensitivity’? It means two things. Firstly, paying attention to my sensations, emotions and thoughts. Secondly, allowing these sensations, emotions and thoughts to influence me. Granted, I shouldn’t allow every passing breeze to sweep me away. Yet I should be open to new experiences and permit them to change my views, my behaviour and even my personality.
Experiences and sensitivity build up one another in a never-ending cycle. I cannot experience anything if I have no sensitivity, and I cannot develop sensitivity unless I undergo a variety of experiences. Sensitivity is not an abstract aptitude that can be developed by reading books or listening to lectures. It is a practical skill that can ripen and mature only by applying it in practice.
Take tea, for example. I start by drinking very sweet ordinary tea while reading the morning paper. The tea is little more than an excuse for a sugar rush. One day I realise that between the sugar and the newspaper, I hardly taste the tea at all. So I reduce the amount of sugar, put the paper aside, close my eyes and focus on the tea itself. I begin to register its unique aroma and flavour. Soon I find myself experimenting with different teas, black and green, comparing their exquisite tangs and delicate bouquets. Within a few months, I drop the supermarket labels and buy my tea at Harrods. I develop a particular liking for ‘Panda Dung tea’ from the mountains of Ya’an in Sichuan province, made from the leaves of tea bushes fertilised by the dung of panda bears. That’s how, one cup at a time, I hone my tea sensitivity and become a tea connoisseur. If in my early tea-drinking days you had served me Panda Dung tea in a Ming Dynasty porcelain goblet, I would not have appreciated it any more than builder’s tea in a paper cup. You cannot experience something if you don’t have the necessary sensitivity, and you cannot develop your sensitivity except by undergoing a long string of experiences.
What’s true of tea is true of all other aesthetic and ethical knowledge. We aren’t born with a ready-made conscience. As we pass through life we hurt people and people hurt us, we act compassionately and others show compassion to us. If we pay attention, our moral sensitivity sharpens, and these experiences become a source of valuable ethical knowledge about what is good, what is right and who I really am.
Humanism thus sees life as a gradual process of inner change, leading from ignorance to enlightenment by means of experiences. The highest aim of humanist life is to fully develop your knowledge through a wide variety of intellectual, emotional and physical experiences. In the early nineteenth century Wilhelm von Humboldt – one of the chief architects of the modern education system – said that the aim of existence is ‘a distillation of the widest possible experience of life into wisdom’. He also wrote that ‘there is only one summit in life – to have taken the measure in feeling of everything human’.4 This could well be the humanist motto.
According to Chinese philosophy, the world is sustained by the interplay of opposing but complementary forces called yin and yang. This may not be true of the physical world, but it is certainly true of the modern world that has been created by the covenant of science and humanism. Every scientific yang contains within it a humanist yin, and vice versa. The yang provides us with power, while the yin provides us with meaning and ethical judgements. The yang and yin of modernity are reason and emotion, the laboratory and the museum, the production line and the supermarket. People often see only the yang and imagine that the modern world is dry, scientific, logical and utilitarian – just like a laboratory or a factory. But the modern world is also an extravagant supermarket. No culture in history has ever given such importance to human feelings, desires and experiences. The humanist view of life as a succession of experiences has become the founding myth of numerous modern industries, from tourism to art. Travel agents and restaurant chefs do not sell us flight tickets, hotels or fancy dinners – they sell us novel experiences.
Similarly, whereas most premodern narratives focused on external events and actions, modern novels, films and poems often emphasise feelings. Graeco-Roman epics and medieval chivalric romances were catalogues of heroic deeds, not feelings. One chapter described how a brave knight fought a monstrous ogre, and killed him. Another chapter recounted how the knight rescued a beautiful princess from a fire-spitting dragon, and killed him. A third chapter narrated how a wicked sorcerer kidnapped the princess, but the knight pursued the sorcerer, and killed him. Small wonder that the hero was invariably a knight, rather than a carpenter or a peasant, for peasants performed no heroic deeds.
Crucially, the heroes did not undergo any significant process of inner change. Achilles, Arthur, Roland and Lancelot were fearless warriors with a chivalric world view before they set out on their adventures, and they remained fearless warriors with the same world view at the end. All the ogres they killed and all the princesses they rescued confirmed their courage and perseverance, but ultimately taught them little.
The humanist focus on feelings and experiences, rather than deeds, transformed art. Wordsworth, Dostoevsky, Dickens and Zola cared little for brave knights and derring-do; instead they described how ordinary labourers and housewives felt. Some people believe that Joyce’s Ulysses represents the apogee of this modern focus on the inner life rather than external actions. In 260,000 words Joyce describes a single day in the life of the Dubliners Stephen Dedalus and Leopold Bloom, who over the course of that day do . . . well, nothing much at all.
Few people have actually read Ulysses cover to cover, but the same change of focus now underpins much of our popular culture too. In the United States, the TV series Survivor is often credited (or blamed) for turning reality shows into a craze. Survivor was the first reality show to make it to the top of the Nielsen ratings, and in 2007 Time magazine listed it among the hundred greatest TV shows of all time.5 In each season twenty contenders in minimal swimsuits are isolated on some tropical island. They have to face various kinds of challenges, and during each episode they vote to oust one of their number. The last one left takes home $1 million.
Audiences in Homeric Greece, in the Roman Empire or in medieval Europe would have found the idea familiar and highly attractive. Twenty challengers go in – only one hero comes out. ‘Wonderful!’ a Homeric prince, a Roman patrician or a crusader knight would have thought to himself as he sat down to watch. ‘Surely we are about to see amazing adventures, life-and-death battles and incomparable acts of heroism and betrayal. The warriors will probably stab each other in the back, or spill their entrails for all to see.’
What a disappointment! The back-stabbing and entrails-spilling remain a mere metaphor. Each episode lasts about an hour. Out of that, fifteen minutes are taken up by commercials for toothpaste, shampoo and cereals. Five minutes are dedicated to incredibly childish challenges, such as who can throw the most coconuts into a hoop, or who can eat the largest number of bugs in one minute. The rest of the time the ‘heroes’ just talk about their feelings! He said she said, and I felt this and I felt that. If a crusader knight had actually been able to sit down to watch Survivor, he would probably have grabbed his battleaxe and smashed the TV out of boredom and frustration.
Today we might think of medieval knights as insensitive brutes. If they lived among us, we would send them to a therapist, who might help them get in touch with their feelings. This is what happens to the Tin Man in The Wizard of Oz. He walks along the yellow brick road with Dorothy and her friends, hoping that when they reach Oz, the great wizard will give him a heart. Likewise, the Scarecrow wants a brain and the Lion wants courage. At the end of their journey they discover that the great wizard is a charlatan, and he can’t give them any of these things. But they discover something far more important: everything they wished for was already within themselves. There was never any need of some godlike wizard in order to become sensitive, wise or brave. You just need to follow the yellow brick road and open yourself up to whatever experiences come your way.
Exactly the same lesson is learned by Captain Kirk and Captain Jean-Luc Picard as they travel the galaxy in the starship Enterprise, by Huckleberry Finn and Jim as they sail down the Mississippi River, by Wyatt and Billy as they ride their Harley-Davidsons in Easy Rider, and by countless other characters in myriad other road movies who leave their home towns in Pennsylvania (or perhaps New South Wales), travel in an old convertible (or perhaps a bus), pass through various life-changing experiences, get in touch with themselves, talk about their feelings, and eventually reach San Francisco (or perhaps Alice Springs) as better and wiser individuals.
The formula Knowledge = Experiences × Sensitivity has changed not just our popular culture, but even our perception of weighty issues like war. Throughout most of history, when people wished to know whether a particular war was just, they asked God, they asked scriptures, and they asked kings, noblemen and priests. Few cared about the opinions and experiences of a common soldier or an ordinary civilian. War narratives such as those of Homer, Virgil and Shakespeare focused on the actions of emperors, generals and outstanding heroes, and though they did not hide the misery of war, this was more than compensated for by a full menu of glory and heroism. Ordinary soldiers appeared as either piles of bodies slaughtered by some Goliath, or a cheering crowd hoisting a triumphant David upon its shoulders.
34. Jean-Jacques Walter, Gustav Adolph of Sweden at the Battle of Breitenfeld (1631).
{© DeAgostini Picture Library/Scala, Florence.}
Look, for example, at the painting above of the Battle of Breitenfeld, which took place on 17 September 1631. The painter, Jean-Jacques Walter, glorifies King Gustav Adolph of Sweden, who led his army that day to a decisive victory. Gustav Adolph towers over the battlefield as if he were some god of war. One gets the impression that the king controls the battle like a chess player moving pawns. The pawns themselves are mostly generic figures, or tiny dots in the background. Walter was not interested in how they felt as they charged, fled, killed or died. They are a faceless collective.
Even when painters focused on the battle itself rather than on the commander, they still looked at it from above, and were far more concerned with collective manoeuvres than with personal feelings. Take, for example, Pieter Snayers’s painting of the Battle of White Mountain in November 1620.
The painting depicts a celebrated Catholic victory in the Thirty Years War over heretical Protestant rebels. Snayers wished to commemorate this victory by painstakingly recording the various formations, manoeuvres and troop movements. You can easily identify the different units, their armaments and their positions within the order of battle. Snayers gave far less importance to the experiences and feelings of the common soldiers. Like Jean-Jacques Walter, he makes us observe the battle from the Olympian vantage point of gods and kings, and gives us the impression that war is a giant chess game.
35. Pieter Snayers, The Battle of White Mountain.
{© Bpk/Bayerische Staatsgemäldesammlungen.}
If you take a closer look – for which you might need a magnifying glass – you realise that The Battle of White Mountain is a bit more complex than a chess game. What at first sight seem to be geometrical abstractions turn upon closer inspection into bloody scenes of carnage. Here and there you can even spot the faces of individual soldiers running or fleeing, firing their guns or impaling an enemy on their pikes. However, these scenes receive their meaning from their place within the overall picture. When we see a cannonball smashing a soldier to bits, we understand it as part of the great Catholic victory. If the soldier is fighting on the Protestant side, his death is a just reward for rebellion and heresy. If the soldier is fighting in the Catholic army, his death is a noble sacrifice for a worthy cause. If we look up, we can see angels hovering high above the battlefield. They are holding a white banner that explains in Latin what happened in this battle, and why it was so important. The message is that God helped Emperor Ferdinand II defeat his enemies on 8 November 1620.
For thousands of years, when people looked at war, they saw gods, emperors, generals and great heroes. But over the last two centuries, the kings and generals have been increasingly pushed to the side, and the limelight has shifted onto the common soldier and his experiences. War novels such as All Quiet on the Western Front and war films such as Platoon begin with a green recruit who knows little about himself and the world, but carries a heavy burden of hopes and illusions. He believes that war is glorious, the cause is just and the general is a genius. A few weeks of real war – of mud, and blood, and the smell of death – shatter his illusions one after another. If he survives, the formerly naïve recruit will leave the war a much wiser man, who no longer believes the clichés and ideals peddled by teachers, film-makers and eloquent politicians.
Paradoxically, this narrative has become so influential that today it is told over and over again even by teachers, film-makers and eloquent politicians. ‘War is not what you see in the movies!’ warn Hollywood blockbusters such as Apocalypse Now, Full Metal Jacket and Black Hawk Down. Enshrined in celluloid, prose or poetry, the feelings of the ordinary grunt have become the ultimate authority on war, which everyone has learned to respect. As the joke goes, ‘How many Vietnam vets does it take to change a light bulb?’ ‘You wouldn’t know, you weren’t there.’6
Painters too have lost interest in generals on horseback and tactical manoeuvres. Instead, they strive to depict how the common soldier feels. Look again at The Battle of Breitenfeld and The Battle of White Mountain. Now look at the following two pictures, both considered masterpieces of twentieth-century war art: The War (Der Krieg) by Otto Dix, and That 2,000 Yard Stare by Tom Lea.
Dix served as a sergeant in the German army during the First World War. Lea covered the 1944 Battle of Peleliu Island for Life magazine. Whereas Walter and Snayers saw war as a military and political phenomenon and wanted us to know what happened in particular battles, Dix and Lea saw war as an emotional phenomenon and wanted us to know how it feels. They didn’t care about the genius of generals or the tactical details of this or that battle. Dix’s soldier might have been in Verdun or Ypres or the Somme – it doesn’t matter which, because war is hell everywhere. Lea’s soldier just happened to be an American GI on Peleliu, but you could have seen exactly the same 2,000-yard stare on the face of a Japanese soldier on Iwo Jima, a German soldier in Stalingrad or a British soldier at Dunkirk.
36. Otto Dix, The War (1929–32).
{Staatliche Kunstsammlungen, Neue Meister, Dresden, Germany. © Lessing Images.}
37. Tom Lea, That 2,000 Yard Stare (1944).
{Oil on canvas, 36" x 28". LIFE Collection of Art WWII, U.S. Army Center of Military History, Ft. Belvoir, Virginia. © Courtesy of the Tom Lea Institute, El Paso, Texas.}
In the paintings of Dix and Lea, the meaning of war does not emanate from tactical movements or divine proclamations. If you want to understand war, don’t look up at the general on the hilltop, or at angels in the sky. Instead, look straight into the eyes of the common soldiers. In Lea’s painting the gaping eyes of a traumatised soldier open a window onto the terrible truth of war. In Dix’s painting, the truth is so unbearable that it must be partly concealed behind a gas mask. No angels fly above the battlefield – only a rotting corpse, dangling from a ruined rafter and pointing an accusing finger.
Artists such as Dix and Lea thus helped overturn the traditional hierarchy of war. Numerous wars in earlier times were certainly as horrific as those of the twentieth century. However, hitherto even atrocious experiences were placed within a wider context that gave them a positive meaning. War might be hell, but it was also the gateway to heaven. A Catholic soldier fighting at the Battle of White Mountain could say to himself: ‘True, I am suffering. But the Pope and the emperor say that we are fighting for a good cause, so my suffering is meaningful.’ Otto Dix employed an opposite kind of logic. He saw personal experience as the source of all meaning, hence his line of thinking ran: ‘I am suffering – and this is bad – hence the whole war is bad. If the kaiser and the clergy nevertheless support this war, they must be mistaken.’7
So far we have discussed humanism as if it were a single coherent world view. In fact, humanism shared the fate of every successful religion, such as Christianity and Buddhism. As it spread and evolved, it fragmented into several conflicting sects. All humanist sects believe that human experience is the supreme source of authority and meaning, yet they interpret human experience in different ways.
Humanism split into three main branches. The orthodox branch holds that each human being is a unique individual possessing a distinctive inner voice and a never-to-be-repeated series of experiences. Every human being is a singular ray of light that illuminates the world from a different perspective, and that adds colour, depth and meaning to the universe. Hence we ought to give as much freedom as possible to every individual to experience the world, follow his or her inner voice and express his or her inner truth. Whether in politics, economics or art, individual free will should have far more weight than state interests or religious doctrines. The more liberty individuals enjoy, the more beautiful, rich and meaningful is the world. Due to this emphasis on liberty, the orthodox branch of humanism is known as ‘liberal humanism’ or simply as ‘liberalism’.*
It is liberal politics that believes the voter knows best. Liberal art holds that beauty is in the eye of the beholder. Liberal economics maintains that the customer is always right. Liberal ethics advises us that if it feels good, we should go ahead and do it. Liberal education teaches us to think for ourselves, because we will find all the answers within.
During the nineteenth and twentieth centuries, as humanism gained increasing social credibility and political power, it sprouted two very different offshoots: socialist humanism, which encompassed a plethora of socialist and communist movements, and evolutionary humanism, whose most famous advocates were the Nazis. Both offshoots agreed with liberalism that human experience is the ultimate source of meaning and authority. Neither believed in any transcendental power or divine law book. If, for example, you had asked Karl Marx what was wrong with ten-year-olds working twelve-hour shifts in smoky factories, he would have answered that it makes the kids feel bad. We should avoid exploitation, oppression and inequality not because God said so, but because they make people miserable.
However, both socialists and evolutionary humanists pointed out that the liberal understanding of the human experience is flawed. Liberals think the human experience is an individual phenomenon. But there are many individuals in the world, and they often feel different things and have contradictory desires. If all authority and meaning flow from individual experiences, how do you settle contradictions between the experiences of different individuals?
On 17 July 2015 the German chancellor Angela Merkel was confronted by a teenage Palestinian refugee girl from Lebanon, whose family was seeking asylum in Germany but faced imminent deportation. The girl, Reem, told Merkel in fluent German that ‘It’s really very hard to watch how other people can enjoy life and you yourself can’t. I don’t know what my future will bring.’ Merkel replied that ‘politics can be tough’ and explained that there are hundreds of thousands of Palestinian refugees in Lebanon, and Germany cannot absorb them all. Stunned by this no-nonsense reply, Reem burst into tears. Merkel proceeded to stroke the desperate girl on the back, but stuck to her guns.
In the ensuing public storm many accused Merkel of cold-hearted insensitivity. To assuage criticism Merkel changed tack, and Reem and her family were given asylum. In the following months Merkel opened the door even wider, welcoming hundreds of thousands of refugees to Germany. But you can’t please everybody. Soon she was under severe attack for succumbing to sentimentalism and for not taking a sufficiently firm stand. Numerous German parents feared that Merkel’s U-turn meant that their children would have a lower standard of living, and perhaps suffer from a tidal wave of Islamisation. Why should they risk their families’ peace and prosperity to help complete strangers who might not even believe in the values of liberalism? Everyone feels very strongly about this matter. How to settle the contradictions between the feelings of the desperate refugees and of the anxious Germans?8
Liberals forever agonise about such contradictions. The best efforts of Locke, Jefferson, Mill and their colleagues have failed to provide us with a fast and easy solution to such conundrums. Holding democratic elections won’t help, because then the question would be who gets to vote in these elections – only German citizens, or also the millions of Asians and Africans who want to immigrate to Germany? Why privilege the feelings of one group over another? Likewise, you cannot resolve the Arab–Israeli conflict by having 8 million Israeli citizens and the 350 million citizens of Arab League nations vote on it. For obvious reasons the Israelis wouldn’t feel committed to the outcome of such a plebiscite.
People feel bound by democratic elections only when they share a basic bond with most other voters. If the experience of other voters is alien to me, and if I believe they don’t understand my feelings and don’t care about my vital interests, then even if I am outvoted by a hundred to one I have absolutely no reason to accept the verdict. Democratic elections usually work only within populations that have some prior common bond, such as shared religious beliefs or national myths. They are a method to settle disagreements among people who already agree on the basics.
Accordingly, in many cases liberalism has fused with age-old collective identities and tribal feelings to form modern nationalism. Today many associate nationalism with anti-liberal forces, but at least during the nineteenth century nationalism was closely aligned with liberalism. Liberals celebrate the unique experiences of individual humans. Each human has distinctive feelings, tastes and quirks that he or she should be free to express and explore as long as they don’t hurt anyone else. Similarly, nineteenth-century nationalists such as Giuseppe Mazzini celebrated the uniqueness of individual nations. They emphasised that many human experiences are communal. You cannot dance the polka by yourself, and you cannot invent and preserve the German language by yourself. Using word, dance, food and drink, each nation fosters different experiences in its members, and develops its own peculiar sensitivities.
Liberal nationalists like Mazzini sought to protect these distinctive national experiences from being oppressed and obliterated by intolerant empires, and envisaged a peaceful community of nations, each free to express and explore its communal feelings without hurting its neighbours. This remains the official ideology of the European Union, whose 2004 constitution states that Europe is ‘united in diversity’ and that the different peoples of Europe remain ‘proud of their own national identities’. The value of preserving the unique communal experiences of the German nation enables even liberal Germans to oppose opening the floodgates of immigration.
Of course the alliance of liberalism with nationalism hardly solved all conundrums, while at the same time it created a host of new ones. How do you compare the value of communal experiences with that of individual experiences? Does preserving polka, bratwurst and the German language justify leaving millions of refugees exposed to poverty and possibly even death? And what happens when fundamental conflicts erupt within nations about the very definition of their identity, as happened in Germany in 1933, in the USA in 1861, in Spain in 1936 or in Egypt in 2011? In such cases holding democratic elections is hardly a cure-all, because the opposing parties have no reason to respect the results.
Lastly, as you dance the nationalist polka, a small but momentous step may take you from believing that your nation is different from all other nations to believing that your nation is better. Nineteenth-century liberal nationalism required the Habsburg and tsarist empires to respect the unique experiences of Germans, Italians, Poles and Slovenes. Twentieth-century ultra-nationalism proceeded to wage wars of conquest and build concentration camps for people who danced to a different tune.
Socialist humanism has taken a very different course. Socialists blame liberals for focusing our attention on our own feelings instead of on what other people experience. Yes, the human experience is the source of all meaning, but there are billions of people in the world and all of them are just as valuable as I am. Whereas liberalism turns my gaze inwards, emphasising my uniqueness and the uniqueness of my nation, socialism demands that I stop obsessing about me and my feelings and instead focus on what others are feeling and how my actions influence their experiences. Global peace will be achieved not by celebrating the distinctiveness of each nation, but by unifying all the workers of the world; and social harmony won’t be achieved by each person narcissistically exploring their own inner depths, but rather by each person prioritising the needs and experiences of others over their own desires.
A liberal may counter that by exploring her own inner world she develops her compassion and her understanding of others. But such reasoning would have cut little ice with Lenin or Mao. They would have explained that individual self-exploration is an indulgent bourgeois vice, and that when I try to get in touch with my inner self, I am more than likely to fall into one or another capitalist trap. My current political views, my likes and dislikes, and my hobbies and ambitions do not reflect my authentic self. Rather, they reflect my upbringing and social surroundings. They depend on my class, and are shaped by my neighbourhood and my school. Rich and poor alike are brainwashed from birth. The rich are taught to disregard the poor, while the poor are taught to disregard their true interests. No amount of self-reflection or psychotherapy will help, because the psychotherapists are also working for the capitalist system.
Indeed, self-reflection is likely only to distance me even further from understanding the truth about myself, because it gives too much consideration to personal decisions and not enough to social conditions. If I am rich, I conclude that it is because I made shrewd choices. If I am mired in poverty, I must have made some mistakes. If I am depressed, a liberal therapist is likely to blame my parents, and to encourage me to set some new aims in life. If I suggest that perhaps I am depressed because I am being exploited by capitalists, and because under the prevailing social system I have no chance of realising my aims, the therapist may well say that I am projecting onto ‘the social system’ my own inner difficulties, and I am projecting onto ‘the capitalists’ unresolved issues with my mother.
According to socialism, instead of spending years talking about my mother, my emotions and my complexes, I should ask myself: who owns the means of production in my country? What are its main exports and imports? What’s the connection between the ruling politicians and international banking? Only by understanding the prevailing socio-economic system and taking into account the experiences of all other people can I truly understand what I feel, and only by common action can we change the system. Yet what person can take into account the experiences of all human beings, and weigh them one against the other in a fair way?
That’s why socialists discourage self-exploration and advocate the establishment of strong collective institutions – such as socialist parties and trade unions – that aim to decipher the world for us. Whereas in liberal politics the voter knows best, and in liberal economics the customer is always right, in socialist politics the party knows best, and in socialist economics the trade union is always right. Authority and meaning still come from human experience – both the party and the trade union are composed of people and work to alleviate human misery – yet individuals must listen to the party and the trade union rather than to their personal feelings.
Evolutionary humanism has a different solution to the problem of conflicting human experiences. Rooting itself in the firm ground of Darwinian evolutionary theory, it insists that conflict is something to applaud rather than lament. Conflict is the raw material of natural selection, which pushes evolution forward. Some humans are simply superior to others, and when human experiences collide, the fittest humans should steamroll everyone else. The same logic that drives humankind to exterminate wild wolves and to ruthlessly exploit domesticated sheep also mandates the oppression of inferior humans by their superiors. It’s a good thing that Europeans conquer Africans and that shrewd businessmen drive the dim-witted to bankruptcy. If we follow this evolutionary logic, humankind will gradually become stronger and fitter, eventually giving rise to superhumans. Evolution didn’t stop with Homo sapiens – there is still a long way to go. However, if in the name of human rights or human equality we emasculate the fittest humans, it will prevent the rise of the superman, and may even cause the degeneration and extinction of Homo sapiens.
Who exactly are these superior humans who herald the coming of the superman? They might be entire races, particular tribes or exceptional individual geniuses. Whoever they may be, what makes them superior is that they have better abilities, manifested in the creation of new knowledge, more advanced technology, more prosperous societies or more beautiful art. The experience of an Einstein or a Beethoven is far more valuable than that of a drunken good-for-nothing, and it is ludicrous to treat them as if they have equal merit. Similarly, if a particular nation has consistently spearheaded human progress, we should rightly consider it superior to other nations that contributed little or nothing to the evolution of humankind.
Consequently, in contrast to liberal artists like Otto Dix, evolutionary humanism maintains that the human experience of war is valuable and even essential. The movie The Third Man is set in Vienna immediately after the end of the Second World War. Reflecting on the recent conflict the character Harry Lime says: ‘After all, it’s not that awful . . . In Italy for thirty years under the Borgias they had warfare, terror, murder and bloodshed, but they produced Michelangelo, Leonardo da Vinci and the Renaissance. In Switzerland they had brotherly love, they had 500 years of democracy and peace, and what did that produce? The cuckoo clock.’ Lime gets almost all his facts wrong – Switzerland was probably the most bloodthirsty corner of early modern Europe (its main export was mercenary soldiers), and the cuckoo clock was actually invented by Germans – but the facts are of lesser importance than Lime’s idea, namely that the experience of war pushes humankind to new achievements. War allows natural selection free rein at last. It exterminates the weak and rewards the fierce and the ambitious. War exposes the truth about life, and awakens the will for power, for glory and for conquest. Nietzsche summed it up by saying that war is ‘the school of life’ and that ‘what does not kill me makes me stronger’.
Similar ideas were expressed by Lieutenant Henry Jones of the British army. Three days before his death on the Western Front in the First World War, the twenty-one-year-old Jones sent a letter to his brother, describing his experience of war in glowing terms:
Have you ever reflected on the fact that, despite the horrors of war, it is at least a big thing? I mean to say that in it one is brought face to face with realities. The follies, selfishness, luxury and general pettiness of the vile commercial sort of existence led by nine-tenths of the people of the world in peacetime are replaced in war by a savagery that is at least more honest and outspoken. Look at it this way: in peacetime one just lives one’s own little life, engaged in trivialities, worrying about one’s own comfort, about money matters, and all that sort of thing – just living for one’s own self. What a sordid life it is! In war, on the other hand, even if you do get killed you only anticipate the inevitable by a few years in any case, and you have the satisfaction of knowing that you have ‘pegged out’ in the attempt to help your country. You have, in fact, realised an ideal, which, as far as I can see, you very rarely do in ordinary life. The reason is that ordinary life runs on a commercial and selfish basis; if you want to ‘get on’, as the saying is, you can’t keep your hands clean.
Personally, I often rejoice that the War has come my way. It has made me realise what a petty thing life is. I think that the War has given to everyone a chance to ‘get out of himself’, as I might say . . . Certainly, speaking for myself, I can say that I have never in all my life experienced such a wild exhilaration as on the commencement of a big stunt, like the last April one for example. The excitement for the last half-hour or so before it is like nothing on earth.9
In his bestseller Black Hawk Down the journalist Mark Bowden relates in similar terms the combat experience of Shawn Nelson, an American soldier, in Mogadishu in 1993:
It was hard to describe how he felt . . . it was like an epiphany. Close to death, he had never felt so completely alive. There had been split seconds in his life when he’d felt death brush past, like when another fast-moving car veered from around a sharp curve and just missed hitting him head on. On this day he had lived with that feeling, with death breathing right in his face . . . for moment after moment after moment, for three hours or more . . . Combat was . . . a state of complete mental and physical awareness. In those hours on the street he had not been Shawn Nelson, he had no connection to the larger world, no bills to pay, no emotional ties, nothing. He had just been a human being staying alive from one nanosecond to the next, drawing one breath after another, fully aware that each one might be his last. He felt he would never be the same.10
Adolf Hitler too was changed and enlightened by his war experiences. In Mein Kampf he relates how, shortly after his unit reached the front line, the soldiers’ initial enthusiasm turned into fear, against which each soldier had to wage a relentless inner war, straining every nerve to avoid being overwhelmed by it. Hitler says that he won this inner war by the winter of 1915/16. ‘At last,’ he writes, ‘my will was undisputed master . . . I was now calm and determined. And this was enduring. Now Fate could bring on the ultimate tests without my nerves shattering or my reason failing.’11
The experience of war revealed to Hitler the truth about the world: it’s a jungle run by the remorseless laws of natural selection. Those who refuse to recognise this truth cannot survive. If you wish to succeed, you must not only understand the laws of the jungle, but embrace them joyfully. It should be stressed that just like anti-war liberal artists, Hitler too sanctified the experience of ordinary soldiers. Indeed, Hitler’s political career is one of the best examples we have of the immense authority accorded to the personal experience of common people in twentieth-century politics. Hitler wasn’t a senior officer – in four years of war, he rose no higher than the rank of corporal. He had no formal education, no professional skills and no political background. He wasn’t a successful businessman or a union activist, he didn’t have friends or relatives in high places, nor any money to speak of. At first, he didn’t even have German citizenship. He was a penniless immigrant.
When Hitler appealed to the German voters and asked for their trust, he could muster only one argument in his favour: his experiences in the trenches had taught him what you can never learn at university, at general headquarters or at a government ministry. People followed him and voted for him because they identified with him, and because they too believed that the world is a jungle, and that what doesn’t kill us only makes us stronger.
Whereas liberalism merged with the milder versions of nationalism to protect the unique experiences of each human community, evolutionary humanists such as Hitler identified particular nations as the engines of human progress and concluded that these nations ought to bludgeon or even exterminate anyone standing in their way. It should be remembered, though, that Hitler and the Nazis represent only one extreme version of evolutionary humanism. Just as Stalin’s gulags do not automatically nullify every socialist idea and argument, so too the horrors of Nazism should not blind us to whatever insights evolutionary humanism might offer. Nazism was born from the pairing of evolutionary humanism with particular racial theories and ultra-nationalist emotions. Not all evolutionary humanists are racists, and not every belief in humankind’s potential for further evolution necessarily calls for setting up police states and concentration camps.
Auschwitz should serve as a blood-red warning sign rather than as a black curtain that hides entire sections of the human horizon. Evolutionary humanism played an important part in the shaping of modern culture, and is likely to play an even greater role in the shaping of the twenty-first century.
To make sure that we understand the differences between the three humanist branches, let’s compare a few human experiences.
Experience no. 1: A musicology professor sits in the Vienna Opera House listening to the opening of Beethoven’s Fifth Symphony. ‘Pa pa pa PAM!’ As the sound waves hit his eardrums, signals travel via the auditory nerve to the brain and the adrenal gland floods his bloodstream with adrenaline. His heartbeat accelerates, his breathing intensifies, the hairs on his neck stand up, and a shiver runs down his spine. ‘Pa pa pa PAM!’
Experience no. 2: It’s 1965. A Mustang convertible is speeding down the Pacific Coast Highway from San Francisco to LA at full throttle. The macho young driver puts on Chuck Berry at full volume: ‘Go! Go, Johnny, go!’ As the sound waves hit his eardrums, signals travel via the auditory nerve to the brain and the adrenal gland floods his bloodstream with adrenaline. His heartbeat accelerates, his breathing intensifies, the hairs on his neck stand up, and a shiver runs down his spine. ‘Go! Go, Johnny, go, go, go!’
Experience no. 3: Deep in the Congolese rainforest, a pygmy hunter stands transfixed. From the nearby village he hears a choir of girls singing their initiation song. ‘Ye oh, oh. Ye oh, eh.’ As the sound waves hit his eardrums, signals travel via the auditory nerve to the brain and the adrenal gland floods his bloodstream with adrenaline. His heartbeat accelerates, his breathing intensifies, the hairs on his neck stand up, and a shiver runs down his spine. ‘Ye oh, oh. Ye oh, eh.’
Experience no. 4: It’s a full-moon night, somewhere in the Canadian Rockies. A wolf is standing on a hilltop listening to the howls of a female in heat. ‘Awoooooo! Awoooooo!’ As the sound waves hit his eardrums, signals travel via the auditory nerve to the brain and the adrenal gland floods his bloodstream with adrenaline. His heartbeat accelerates, his breathing intensifies, the hairs on his neck stand up, and a shiver runs down his spine. ‘Awoooooo! Awoooooo!’
Which of these four experiences is the most valuable?
Liberals will tend to say that the experiences of the musicology professor, of the young driver and of the Congolese hunter are all equally valuable, and all should be equally cherished. Every human experience contributes something unique, and enriches the world with new meaning. Some people like classical music, others love rock and roll, and still others prefer traditional African chants. Music students should be exposed to the widest possible range of genres, and at the end of the day, they can all go to the iTunes store, punch in their credit card numbers and buy whatever they like. Beauty is in the ears of the listener, and the customer is always right. The wolf, though, isn’t human, hence his experiences are far less valuable. That’s why the life of a wolf is worth less than the life of a human, and why it is perfectly okay to kill a wolf in order to save a human. When all is said and done, wolves don’t get to vote in any beauty contests, nor do they own any credit cards.
This liberal approach is manifested, for example, in the Voyager golden record. In 1977 the Americans launched the space probe Voyager 1 on a journey to outer space. By now it has left our solar system, making it the first man-made object to traverse interstellar space. Besides state-of-the-art scientific equipment, NASA placed on board a golden record intended to introduce planet Earth to any inquisitive aliens who might encounter the probe.
The record contains a variety of scientific and cultural information about Earth and its inhabitants, some images and voices, and several dozen pieces of music from around the world, which are supposed to represent a fair sampling of earthly artistic achievement. The musical sample mixes in no obvious order classical pieces including the opening movement of Beethoven’s Fifth Symphony, contemporary popular music including Chuck Berry’s ‘Johnny B. Goode’, and traditional music from throughout the world, including an initiation song of Congolese pygmy girls. Though the record also contains some canine howls, they are not part of the music sample, but rather relegated to a different section that also includes the sounds of wind, rain and surf. The message to potential listeners in Alpha Centauri is that Beethoven, Chuck Berry and the pygmy initiation song are of equal merit, whereas wolf howls belong to an altogether different category.
Socialists will probably agree with the liberals that the wolf’s experience is of little value. But their attitude towards the three human experiences will be quite different. A socialist true-believer will explain that the real value of music depends not on the experiences of the individual listener, but on the impact it has on the experiences of other people and of society as a whole. As Mao said, ‘There is no such thing as art for art’s sake, art that stands above classes, art that is detached from or independent of politics.’12
So when evaluating the musical experience socialists will focus, for example, on the fact that Beethoven wrote the Fifth Symphony for an audience of upper-class white Europeans, exactly when Europe was about to embark on its conquest of Africa. His symphony reflected Enlightenment ideals, which glorified upper-class white men, and justified the conquest of Africa as ‘the white man’s burden’.
Rock and roll – the socialists will say – was pioneered by downtrodden African American musicians who drew inspiration from genres like blues, jazz and gospel. However, in the 1950s and 1960s rock and roll was hijacked by mainstream white America, and pressed into the service of consumerism, American imperialism and Coca-Colonialism. Rock and roll was commercialised and appropriated by privileged white teenagers in their petit-bourgeois fantasy of rebellion. Chuck Berry himself bowed to the dictates of the capitalist juggernaut. While he originally sang about ‘a coloured boy named Johnny B. Goode’, under pressure from white-owned radio stations Berry changed the lyrics to ‘a country boy named Johnny B. Goode’.
As for the choir of Congolese pygmy girls – their initiation songs are part of a patriarchal power structure that brainwashes both men and women to conform to an oppressive gender order. And if a recording of such an initiation song ever makes it to the global marketplace, it merely serves to reinforce Western colonial fantasies about Africa in general and African women in particular.
So which music is best: Beethoven’s Fifth, ‘Johnny B. Goode’ or the pygmy initiation song? Should the government finance the building of opera houses, rock and roll venues or African-heritage exhibitions? And what should we teach music students in schools and colleges? Well, don’t ask me. Ask the party’s cultural commissar.
Whereas liberals tiptoe around the minefield of cultural comparisons, fearful of committing some politically incorrect faux pas, and whereas socialists leave it to the party to find the correct path through this minefield, evolutionary humanists gleefully jump right in, setting off all the mines and relishing the mayhem. They may start by pointing out that both liberals and socialists draw the line at other animals, and have no trouble admitting that humans are superior to wolves, and that consequently human music is far more valuable than wolf howls. Yet humankind itself is not exempt from the forces of evolution. Just as humans are superior to wolves, so some human cultures are more advanced than others. There is an unambiguous hierarchy of human experiences, and we shouldn’t be apologetic about it. The Taj Mahal is more beautiful than a straw hut, Michelangelo’s David is superior to my five-year-old niece’s latest clay figurine, and Beethoven composed far better music than Chuck Berry or the Congolese pygmies. There, we’ve said it!
According to evolutionary humanists, anyone arguing that all human experiences are equally valuable is either an imbecile or a coward. Such vulgarity and timidity will lead only to the degeneration and extinction of humankind, as human progress is impeded in the name of cultural relativism or social equality. If liberals or socialists had lived in the Stone Age, they would probably have seen little merit in the murals of Lascaux and Altamira, and would have insisted that they were in no way superior to Neanderthal doodles.
Initially the differences between liberal humanism, socialist humanism and evolutionary humanism seemed rather frivolous. Set against the enormous gap separating all humanist sects from Christianity, Islam or Hinduism, the arguments between different versions of humanism were trifling. As long as we all agree that God is dead and that only the human experience gives meaning to the universe, does it really matter whether we think that all human experiences are equal or that some are superior to others? Yet as humanism conquered the world, these internal schisms widened, and eventually flared up into the deadliest religious war in history.
In the first decade of the twentieth century, the liberal orthodoxy was still confident of its strength. Liberals were convinced that if individuals had maximum freedom to express themselves and follow their hearts, the world would enjoy unprecedented peace and prosperity. It might take time to completely dismantle the fetters of traditional hierarchies, obscurantist religions and brutal empires, but every decade would bring new liberties and achievements, and eventually we would create paradise on earth. In the halcyon days of June 1914, liberals thought history was on their side.
By Christmas 1914 liberals were shell-shocked, and in the following decades their ideas were subjected to a double assault from both left and right. Socialists argued that liberalism is in fact a fig leaf for a ruthless, exploitative and racist system. For vaunted ‘liberty’, read ‘property’. The defence of the individual’s right to do what feels good amounts in most cases to safeguarding the property and privileges of the middle and upper classes. What good is the liberty to live where you want when you cannot pay the rent; to study what interests you when you cannot afford the tuition fees; and to travel where you fancy when you cannot buy a car? Under liberalism, went a famous quip, everyone is free to starve. Even worse, by encouraging people to view themselves as isolated individuals, liberalism separates them from their fellow class members and prevents them from uniting against the system that oppresses them. Liberalism thereby perpetuates inequality, condemning the masses to poverty and the elite to alienation.
While liberalism staggered under this punch from the left, evolutionary humanism struck from the right. Racists and fascists blamed both liberalism and socialism for subverting natural selection and causing the degeneration of humankind. They warned that if all humans were given equal value and equal breeding opportunities, natural selection would cease to function. The fittest humans would be submerged in an ocean of mediocrity, and instead of evolving into supermen, humankind would become extinct.
From 1914 to 1989 a murderous war of religion raged between the three humanist sects, and liberalism at first sustained one defeat after the other. Not only did communist and fascist regimes take over numerous countries, but the core liberal ideas were exposed as naïve at best, if not downright dangerous. Just give freedom to individuals and the world will enjoy peace and prosperity? Yeah, right.
The Second World War, which with hindsight we remember as a great liberal victory, hardly looked like that at the time. The war began in September 1939 as a conflict between a mighty liberal alliance and an isolated Nazi Germany. (Even Fascist Italy preferred to play a waiting game until June of the following year.) The liberal alliance enjoyed overwhelming numerical and economic superiority. While German GDP in 1940 stood at $387 billion, the GDP of Germany’s European opponents totalled $631 billion (not including the GDP of the overseas British dominions and of the British, French, Dutch and Belgian empires). Still, in the spring of 1940 it took Germany a mere three months to deal the liberal alliance a decisive blow, occupying France, the Low Countries, Norway and Denmark. The UK was saved from a similar fate only by the English Channel.13
The Germans were eventually beaten only after the liberal countries allied themselves with the Soviet Union, which bore the brunt of the conflict and paid a much higher price: 25 million Soviet citizens died in the war, compared to half a million Britons and half a million Americans. Much of the credit for defeating Nazism should be given to communism. And at least in the short term, communism was also the great beneficiary of the war.
The Soviet Union entered the war as an isolated communist pariah. It emerged as one of the two global superpowers and the leader of an expanding international bloc. By 1949 eastern Europe had become a Soviet satellite, the Chinese Communist Party had won the Chinese Civil War, and the United States was gripped by anti-communist hysteria. Revolutionary and anti-colonial movements throughout the world looked longingly towards Moscow and Beijing, while liberalism became identified with the racist European empires. As these empires collapsed, they were usually replaced by either military dictatorships or socialist regimes, not liberal democracies. In 1956 the Soviet premier, Nikita Khrushchev, confidently boasted to the liberal West that ‘Whether you like it or not, history is on our side. We will bury you!’
Khrushchev sincerely believed this, as did increasing numbers of Third World leaders and First World intellectuals. In the 1960s and 1970s the word ‘liberal’ became a term of abuse in many Western universities. North America and western Europe experienced growing social unrest, as radical left-wing movements strove to undermine the liberal order. Students in Cambridge, the Sorbonne and the People’s Republic of Berkeley thumbed through Chairman Mao’s Little Red Book and hung Che Guevara’s heroic portrait over their beds. In 1968 the wave crested with the outbreak of protests and riots all over the Western world. Mexican security forces killed dozens of students in the notorious Tlatelolco Massacre, students in Rome fought the Italian police in the so-called Battle of Valle Giulia, and the assassination of Martin Luther King sparked days of riots and protests in more than a hundred American cities. In May students took over the streets of Paris, President de Gaulle fled to a French military base in Germany, and well-to-do French citizens trembled in their beds, having guillotine nightmares.
By 1970 the world contained 130 independent countries, but only thirty of these were liberal democracies, most of which were crammed into the north-western corner of Europe. India was the only important Third World country that committed to the liberal path after securing its independence, but even India distanced itself from the Western bloc and leaned towards the Soviets.
In 1975 the liberal camp suffered its most humiliating defeat of all: the Vietnam War ended with the North Vietnamese David overcoming the American Goliath. In quick succession communism took over South Vietnam, Laos and Cambodia. On 17 April 1975 the Cambodian capital, Phnom Penh, fell to the Khmer Rouge. Two weeks later people all over the world watched on TV as helicopters evacuated the last Yankees from the rooftop of the American Embassy in Saigon. Many were certain that the American Empire was falling. Before anyone could say ‘domino theory’, in June Indira Gandhi proclaimed the Emergency in India, and it seemed that the world’s largest democracy was on its way to becoming yet another socialist dictatorship.
Liberal democracy increasingly looked like an exclusive club for ageing white imperialists, who had little to offer the rest of the world, or even to their own youth. Washington hailed itself as the leader of the free world, but most of its allies were either authoritarian kings (such as King Khaled of Saudi Arabia, King Hassan of Morocco and the Persian shah) or military dictators (such as the Greek colonels, General Pinochet in Chile, General Franco in Spain, General Park in South Korea, General Geisel in Brazil and Generalissimo Chiang Kai-shek in Taiwan).
Despite the support of all these kings and generals, militarily the Warsaw Pact had a huge numerical superiority over NATO. In order to reach parity in conventional armaments, Western countries would probably have had to scrap liberal democracy and the free market, and become totalitarian states on a permanent war footing. Liberal democracy was saved only by nuclear weapons. NATO adopted the MAD doctrine (Mutual Assured Destruction), according to which even conventional Soviet attacks would be answered by an all-out nuclear strike. ‘If you attack us,’ threatened the liberals, ‘we will make sure nobody comes out alive.’ Behind this monstrous shield, liberal democracy and the free market managed to hold out in their last bastions, and Westerners got to enjoy sex, drugs and rock and roll, as well as washing machines, refrigerators and televisions. Without nukes there would have been no Beatles, no Woodstock and no overflowing supermarkets. But in the mid-1970s it seemed that nuclear weapons notwithstanding, the future belonged to socialism.
38. The evacuation of the American Embassy in Saigon.
{© Bettmann/Corbis.}
And then everything changed. Liberal democracy crawled out of history’s dustbin, cleaned itself up and conquered the world. The supermarket proved to be far stronger than the gulag. The blitzkrieg began in southern Europe where the authoritarian regimes in Greece, Spain and Portugal collapsed, giving way to democratic governments. In 1977 Indira Gandhi ended the Emergency, re-establishing democracy in India. During the 1980s military dictatorships in East Asia and Latin America were replaced by democratic governments in countries such as Brazil, Argentina, Taiwan and South Korea. In the late 1980s and early 1990s the liberal wave turned into a veritable tsunami, sweeping away the mighty Soviet Empire and raising expectations of the coming end of history. After decades of defeats and setbacks, liberalism won a decisive victory in the Cold War, emerging triumphant from the humanist wars of religion, albeit a bit worse for wear.
As the Soviet Empire imploded, liberal democracies replaced communist regimes not only in eastern Europe, but also in many of the former Soviet republics, such as the Baltic States, Ukraine, Georgia and Armenia. Even Russia nowadays pretends to be a democracy. Victory in the Cold War gave renewed impetus to the spread of the liberal model elsewhere around the world, most notably in Latin America, South Asia and Africa. Some liberal experiments ended in abject failure, but the number of success stories is impressive. Indonesia, Nigeria and Chile, for instance, had been ruled by military strongmen for decades, but all are now functioning democracies.
If a liberal had fallen asleep in June 1914 and awakened in June 2014, he or she would have felt very much at home. Once again people believe that if you just give individuals more freedom, the world will enjoy peace and prosperity. The entire twentieth century looks like a big mistake. Back in the spring of 1914 humankind was speeding on the liberal highway when it took a wrong turn and entered a cul-de-sac. It then required eight decades and three horrendous global wars to find its way back to the highway. Of course, these decades were not a total waste; they did give us antibiotics, nuclear energy and computers, as well as feminism, de-colonialism and free sex. In addition, liberalism itself smarted from the experience and is less conceited than it was a century ago. It has adopted various ideas and institutions from its socialist and fascist rivals, in particular a commitment to provide the general public with education, health and welfare services. Yet the core liberal package has changed surprisingly little. Liberalism still sanctifies individual liberties above all, and still has a firm belief in the voter and the customer. In the early twenty-first century, this is the only show in town.
As of 2016 there is no serious alternative to the liberal package of individualism, human rights, democracy and a free market. The social protests that swept the Western world in 2011 – such as Occupy Wall Street and the Spanish 15-M movement – have absolutely nothing against democracy, individualism and human rights, or even against the basic principles of free-market economics. Just the opposite – they take governments to task for not living up to these liberal ideals. They demand that the market be really free, instead of being controlled and manipulated by corporations and banks ‘too big to fail’. They call for truly representative democratic institutions that will serve the interests of ordinary citizens rather than of moneyed lobbyists and powerful interest groups. Even those blasting stock exchanges and parliaments with the harshest criticism don’t have a viable alternative model for running the world. While it is a favourite pastime of Western academics and activists to find fault with the liberal package, they have so far failed to come up with anything better.
China seems to offer a much more serious challenge than Western social protestors. Despite liberalising its economy, China is neither a democracy nor a truly free-market economy, which does not prevent it from becoming the economic giant of the twenty-first century. Yet this economic giant casts a very small ideological shadow. Nobody seems to know what the Chinese believe these days – including the Chinese themselves. In theory China is still communist, but in practice it is nothing of the kind. Some Chinese thinkers and leaders toy with a return to Confucianism, but that’s hardly more than a convenient facade. This ideological vacuum makes China the most promising breeding ground for the new techno-religions emerging from Silicon Valley (which we will discuss in the following chapters). But these techno-religions, with their belief in immortality and virtual paradises, will take at least a decade or two to establish themselves. Hence at present China doesn’t pose a real alternative to liberalism. For bankrupt Greeks despairing of the liberal model and searching for a substitute, ‘imitating the Chinese’ isn’t a viable option.
How about radical Islam, then? Or fundamentalist Christianity, messianic Judaism or revivalist Hinduism? Whereas the Chinese don’t know what they believe, religious fundamentalists know only too well. More than a century after Nietzsche pronounced Him dead, God seems to be making a comeback. But this is a mirage. God is dead – it’s just taking a while to get rid of the body. Radical Islam poses no serious threat to the liberal package, because for all their fervour the zealots don’t really understand the world of the twenty-first century, and have nothing relevant to say about the novel dangers and opportunities that new technologies are generating all around us.
Religion and technology always dance a delicate tango. They push one another, depend on one another and cannot stray too far away from one another. Technology depends on religion because every invention has many potential applications, and the engineers need some prophet to make the crucial choices and point towards the required destination. Thus in the nineteenth century engineers invented locomotives, radios and internal combustion engines. But as the twentieth century proved, you can use these very same tools to create fascist societies, communist dictatorships and liberal democracies. Without religious convictions, the locomotives cannot decide which way to go.
On the other hand, technology often defines the scope and limits of our religious visions, like a waiter that demarcates our appetites by handing us a menu. New technologies kill old gods and give birth to new gods. That’s why agricultural deities were different from hunter-gatherer spirits, why factory hands fantasised about different paradises than peasants and why the revolutionary technologies of the twenty-first century are far more likely to spawn unprecedented religious movements than to revive medieval creeds. Islamic fundamentalists may repeat the mantra that ‘Islam is the answer’, but religions that lose touch with the technological realities of the day forfeit their ability even to understand the questions being asked. What will happen to the job market once artificial intelligence outperforms humans in most cognitive tasks? What will be the political impact of a massive new class of economically useless people? What will happen to relationships, families and pension funds when nanotechnology and regenerative medicine turn eighty into the new fifty? What will happen to human society when biotechnology enables us to have designer babies, and to open unprecedented gaps between rich and poor?
You will not find the answers to any of these questions in the Qur’an or sharia law, nor in the Bible or in the Confucian Analects, because nobody in the medieval Middle East or in ancient China knew much about computers, genetics or nanotechnology. Radical Islam may promise an anchor of certainty in a world of technological and economic storms – but in order to navigate a storm you need a map and a rudder rather than just an anchor. Hence radical Islam may appeal to people born and raised in its fold, but it has precious little to offer unemployed Spanish youths or anxious Chinese billionaires.
True, hundreds of millions may nevertheless go on believing in Islam, Christianity or Hinduism. But numbers alone don’t count for much in history. History is often shaped by small groups of forward-looking innovators rather than by the backward-looking masses. Ten thousand years ago most people were hunter-gatherers and only a few pioneers in the Middle East were farmers. Yet the future belonged to the farmers. In 1850 more than 90 per cent of humans were peasants, and in the small villages along the Ganges, the Nile and the Yangtze nobody knew anything about steam engines, railroads or telegraph lines. Yet the fate of those peasants had already been sealed in Manchester and Birmingham by the handful of engineers, politicians and financiers who spearheaded the Industrial Revolution. Steam engines, railroads and telegraphs transformed the production of food, textiles, vehicles and weapons, giving industrial powers a decisive edge over traditional agricultural societies.
Even when the Industrial Revolution spread around the world and penetrated up the Ganges, Nile and Yangtze, most people continued to believe in the Vedas, the Bible, the Qur’an and the Analects more than in the steam engine. As today, so too in the nineteenth century there was no shortage of priests, mystics and gurus who argued that they alone held the solution to all of humanity’s woes, including to the new problems created by the Industrial Revolution. For example, between the 1820s and 1880s Egypt (backed by Britain) conquered Sudan and tried to modernise the country and incorporate it into the new international trade network. This destabilised traditional Sudanese society, creating widespread resentment and fostering revolts. In 1881 a local religious leader, Muhammad Ahmad bin Abdallah, declared that he was the Mahdi (the Messiah), sent to establish God’s law on earth. His supporters defeated the Anglo-Egyptian army and beheaded its commander – General Charles Gordon – in a gesture that shocked Victorian Britain. They then established in Sudan an Islamic theocracy governed by sharia law, which lasted until 1898.
Meanwhile in India, Dayananda Saraswati headed a Hindu revival movement, whose basic principle was that the Vedic scriptures are never wrong. In 1875 he founded the Arya Samaj (Noble Society), dedicated to the spreading of Vedic knowledge – though truth be told, Dayananda often interpreted the Vedas in a surprisingly liberal way, supporting, for example, equal rights for women long before the idea became popular in the West.
Dayananda’s contemporary, Pope Pius IX, had much more conservative views about women, but shared Dayananda’s admiration for superhuman authority. Pius led a series of reforms in Catholic dogma and established the novel principle of papal infallibility, according to which the Pope can never err in matters of faith (this seemingly medieval idea became binding Catholic dogma only in 1870, eleven years after Charles Darwin published On the Origin of Species).
Thirty years before the Pope discovered that he is incapable of making mistakes, a failed Chinese scholar called Hong Xiuquan had a succession of religious visions. In these visions God revealed that Hong was none other than the younger brother of Jesus Christ. God then invested Hong with a divine mission. He told Hong to expel the Manchu ‘demons’ that had ruled China since the seventeenth century, and establish on earth the Great Peaceful Kingdom of Heaven (Taiping Tiānguó). Hong’s message fired the imagination of millions of desperate Chinese, who were shaken by China’s defeats in the Opium Wars and by the coming of modern industry and European imperialism. But Hong did not lead them to a kingdom of peace. Rather, he led them against the Manchu Qing dynasty in the Taiping Rebellion – the deadliest war of the nineteenth century, which lasted from 1850 to 1864. At least 20 million people lost their lives, far more than in the Napoleonic Wars or in the American Civil War.
Hundreds of millions clung to the religious dogmas of Hong, Dayananda, Pius and the Mahdi even as industrial factories, railroads and steamships filled the world. Yet most of us don’t think about the nineteenth century as the age of faith. When we think of nineteenth-century visionaries we are far more likely to recall Marx, Engels and Lenin than the Mahdi, Pius IX or Hong Xiuquan. And rightly so. Though in 1850 socialism was only a fringe movement, it soon gathered momentum and changed the world in far more profound ways than the self-proclaimed messiahs of China and Sudan. If you value national health services, pension funds and free schools, you need to thank Marx and Lenin (and Otto von Bismarck) far more than Hong Xiuquan or the Mahdi.
Why did Marx and Lenin succeed where Hong and the Mahdi failed? Not because socialist humanism was philosophically more sophisticated than Islamic and Christian theology, but rather because Marx and Lenin devoted more attention to understanding the technological and economic realities of their time than to scrutinising ancient texts and prophetic dreams. Steam engines, railroads, telegraphs and electricity created unheard-of problems as well as unprecedented opportunities. The experiences, needs and hopes of the new urban proletariat were simply too different from those of biblical peasants. To answer these needs and hopes, Marx and Lenin studied how a steam engine functions, how a coal mine operates, how railroads shape the economy and how electricity influences politics.
Lenin was once asked to define communism in a single sentence. ‘Communism is power to workers’ councils,’ he said, ‘plus electrification of the whole country.’ There can be no communism without electricity, without railroads, without radio. You couldn’t have established a communist regime in sixteenth-century Russia, because communism necessitates the concentration of information and resources in one hub. ‘From each according to his ability, to each according to his needs’ only works when produce can easily be collected and distributed across vast distances, and when activities can be monitored and coordinated over entire countries.
Marx and his followers understood the new technological realities and the new human experiences, so they had relevant answers to the new problems of industrial society, as well as original ideas about how to benefit from the unprecedented opportunities. The socialists created a brave new religion for a brave new world. They promised salvation through technology and economics, thus establishing the first techno-religion in history, and changing the foundations of ideological discourse. Before Marx, people defined and divided themselves according to their views about God, not about production methods. Since Marx, questions of technology and economic structure became far more important and divisive than debates about the soul and the afterlife. In the second half of the twentieth century humankind almost obliterated itself in an argument about production methods. Even the harshest critics of Marx and Lenin adopted their basic attitude towards history and society, and began thinking much more carefully about technology and production than about God and heaven.
In the mid-nineteenth century few people were as perceptive as Marx, hence only a few countries underwent rapid industrialisation. These few countries conquered the world. Most societies failed to understand what was happening, and therefore missed the train of progress. Dayananda’s India and the Mahdi’s Sudan remained far more preoccupied with God than with steam engines, hence they were occupied and exploited by industrial Britain. Only in the last few years has India managed to make significant progress in closing the economic and geopolitical gap separating it from Britain. Sudan is still struggling, far behind.
In the early twenty-first century the train of progress is again pulling out of the station – and this will probably be the last train ever to leave the station called Homo sapiens. Those who miss this train will never get a second chance. In order to get a seat on it you need to understand twenty-first-century technology, and in particular the powers of biotechnology and computer algorithms. These powers are far more potent than steam and the telegraph, and they will not be used merely for the production of food, textiles, vehicles and weapons. The main products of the twenty-first century will be bodies, brains and minds, and the gap between those who know how to engineer bodies and brains and those who do not will be far bigger than the gap between Dickens’s Britain and the Mahdi’s Sudan. Indeed, it will be bigger than the gap between Sapiens and Neanderthals. In the twenty-first century, those who ride the train of progress will acquire divine abilities of creation and destruction, while those left behind will face extinction.
Socialism, which was very up to date a hundred years ago, failed to keep up with new technology. Leonid Brezhnev and Fidel Castro held on to ideas that Marx and Lenin formulated in the age of steam, and did not understand the power of computers and biotechnology. Liberals, in contrast, adapted far better to the information age. This partly explains why Khrushchev’s 1956 prediction never materialised, and why it was the liberal capitalists who eventually buried the Marxists. If Marx came back to life today, he would probably urge his few remaining disciples to devote less time to reading Das Kapital and more time to studying the Internet and the human genome.
Radical Islam is in a far worse position than socialism. It has not yet come to terms even with the Industrial Revolution – no wonder it has little of relevance to say about genetic engineering and artificial intelligence. Islam, Christianity and other traditional religions are still important players in the world. Yet their role is now largely reactive. In the past, they were a creative force. Christianity, for example, spread the hitherto heretical notion that all humans are equal before God, thereby changing human political structures, social hierarchies and even gender relations. In his Sermon on the Mount Jesus went further, insisting that the meek and oppressed are God’s favourite people, thus turning the pyramid of power on its head, and providing ammunition for generations of revolutionaries.
In addition to social and ethical reforms, Christianity was responsible for important economic and technological innovations. The Catholic Church established medieval Europe’s most sophisticated administrative system, and pioneered the use of archives, catalogues, timetables and other techniques of data processing. The Vatican was the closest thing twelfth-century Europe had to Silicon Valley. The Church established Europe’s first economic corporations – the monasteries – which for 1,000 years spearheaded the European economy and introduced advanced agricultural and administrative methods. Monasteries were the first institutions to use clocks, and for centuries they and the cathedral schools were the most important learning centres of Europe, helping to found many of Europe’s first universities, such as Bologna, Oxford and Salamanca.
Today the Catholic Church continues to enjoy the loyalties and tithes of hundreds of millions of followers. Yet it and the other theist religions have long since turned from creative into reactive forces. They are busy with rearguard holding operations more than with pioneering novel technologies, innovative economic methods or groundbreaking social ideas. They now mostly agonise over the technologies, methods and ideas propagated by other movements. Biologists invent the contraceptive pill – and the Pope doesn’t know what to do about it. Computer scientists develop the Internet – and rabbis argue whether orthodox Jews should be allowed to surf it. Feminist thinkers call upon women to take possession of their bodies – and learned muftis debate how to confront such incendiary ideas.
Ask yourself: what was the most influential discovery, invention or creation of the twentieth century? That’s a difficult question, because it is hard to choose from a long list of candidates, including scientific discoveries such as antibiotics, technological inventions such as computers, and ideological creations such as feminism. Now ask yourself: what was the most influential discovery, invention or creation of traditional religions such as Islam and Christianity in the twentieth century? This too is a very difficult question, because there is so little to choose from. What did priests, rabbis and muftis discover in the twentieth century that can be mentioned in the same breath as antibiotics, computers or feminism? Having mulled over these two questions, from where do you think the big changes of the twenty-first century will emerge: from the Islamic State, or Google? Yes, the Islamic State knows how to put videos on YouTube; but leaving aside the industry of torture, what new inventions have emerged from Syria or Iraq lately?
Billions of people, including many scientists, continue to use religious scriptures as a source of authority, but these texts are no longer a source of creativity. Think, for example, about the acceptance of gay marriage or female clergy by the more progressive branches of Christianity. Where did this acceptance originate? Not from reading the Bible, St Augustine or Martin Luther. Rather, it came from reading texts like Michel Foucault’s The History of Sexuality or Donna Haraway’s ‘A Cyborg Manifesto’.14 Yet Christian true-believers – however progressive – cannot admit to drawing their ethics from Foucault and Haraway. So they go back to the Bible, to St Augustine and to Martin Luther, and make a very thorough search. They read page after page and story after story with the utmost attention, until they finally discover what they need: some maxim, parable or ruling that, if interpreted creatively enough, means that God blesses gay marriages and women can be ordained to the priesthood. They then pretend the idea originated in the Bible, when in fact it originated with Foucault. The Bible is kept as a source of authority, even though it is no longer a true source of inspiration.
That’s why traditional religions offer no real alternative to liberalism. Their scriptures don’t have anything to say about genetic engineering or artificial intelligence, and most priests, rabbis and muftis don’t understand the latest breakthroughs in biology and computer science. For if you want to understand these breakthroughs, you don’t have much choice – you need to spend time reading scientific articles and conducting lab experiments instead of memorising and debating ancient texts.
That doesn’t mean liberalism can rest on its laurels. True, it has won the humanist wars of religion, and as of 2016 it has no viable alternative. But its very success may contain the seeds of its ruin. The triumphant liberal ideals are now pushing humankind to reach for immortality, bliss and divinity. Egged on by the allegedly infallible wishes of customers and voters, scientists and engineers devote more and more energies to these liberal projects. Yet what the scientists are discovering and what the engineers are developing may unwittingly expose both the inherent flaws in the liberal world view and the blindness of customers and voters. When genetic engineering and artificial intelligence reveal their full potential, liberalism, democracy and free markets might become as obsolete as flint knives, tape cassettes, Islam and communism.
This book began by forecasting that in the twenty-first century, humans will try to attain immortality, bliss and divinity. This forecast isn’t very original or far-sighted. It simply reflects the traditional ideals of liberal humanism. Since humanism has long sanctified the life, the emotions and the desires of human beings, it’s hardly surprising that a humanist civilisation will want to maximise human lifespans, human happiness and human power. Yet the third and final part of the book will argue that attempting to realise this humanist dream will undermine its very foundations by unleashing new post-humanist technologies. The humanist belief in feelings has enabled us to benefit from the fruits of the modern covenant without paying its price. We don’t need any gods to limit our power and give us meaning – the free choices of customers and voters supply us with all the meaning we require. What, then, will happen once we realise that customers and voters never make free choices, and once we have the technology to calculate, design or outsmart their feelings? If the whole universe is pegged to the human experience, what will happen once the human experience becomes just another designable product, no different in essence from any other item in the supermarket?
39. Brains as computers – computers as brains. Artificial intelligence is now poised to surpass human intelligence.
39. © VLADGRIN/Shutterstock.com.
Can humans go on running the world and giving it meaning?
How do biotechnology and artificial intelligence threaten humanism?
Who might inherit humankind, and what new religion might replace humanism?
In 2016 the world is dominated by the liberal package of individualism, human rights, democracy and the free market. Yet twenty-first-century science is undermining the foundations of the liberal order. Because science does not deal with questions of value, it cannot determine whether liberals are right in valuing liberty more than equality, or in valuing the individual more than the collective. However, like every other religion, liberalism too is based not only on abstract ethical judgments, but also on what it believes to be factual statements. And these factual statements just don’t stand up to rigorous scientific scrutiny.
Liberals value individual liberty so much because they believe that humans have free will. According to liberalism the decisions of voters and customers are neither deterministic nor random. People are of course influenced by external forces and chance events, but at the end of the day each of us can wave the magic wand of freedom and decide things for ourselves. This is the reason liberalism gives so much importance to voters and customers, and instructs us to follow our heart and do what feels good. It is our free will that imbues the universe with meaning, and since no outsider can know how you really feel or predict your choices for sure, you shouldn’t trust any Big Brother to look after your interests and desires.
Attributing free will to humans is not an ethical judgement – it purports to be a factual description of the world. Although this so-called factual description might have made sense back in the days of John Locke, Jean-Jacques Rousseau and Thomas Jefferson, it does not sit well with the latest findings of the life sciences. The contradiction between free will and contemporary science is the elephant in the laboratory, which many prefer not to see as they peer into their microscopes and fMRI scanners.1
In the eighteenth century Homo sapiens was like a mysterious black box, whose inner workings were beyond our grasp. Hence when scholars asked why a man drew a knife and stabbed another to death, an acceptable answer was: ‘Because he chose to. He used his free will to choose murder, which is why he is fully responsible for his crime.’ Over the last century, as scientists opened up the Sapiens black box, they discovered neither soul, nor free will, nor ‘self’ inside – but only genes, hormones and neurons that obey the same physical and chemical laws governing the rest of reality. Today when scholars ask why a man drew a knife and stabbed someone to death, answering ‘Because he chose to’ doesn’t cut the mustard. Instead, geneticists and brain scientists provide a much more detailed answer: ‘He did it due to such-and-such electrochemical processes in the brain that were shaped by a particular genetic make-up, which in turn reflect ancient evolutionary pressures coupled with chance mutations.’
The electrochemical brain processes that result in murder are either deterministic or random or a combination of both – but they are never free. For example, when a neuron fires an electric charge, this may be either a deterministic reaction to external stimuli, or perhaps the outcome of a random event such as the spontaneous decomposition of a radioactive atom. Neither option leaves any room for free will. Decisions reached through a chain reaction of biochemical events, each determined by a previous event, are certainly not free. Decisions resulting from random subatomic accidents aren’t free either; they are just random. And when random accidents combine with deterministic processes, we get probabilistic outcomes, but this too doesn’t amount to freedom.
Suppose we build a robot whose central processing unit is linked to a radioactive lump of uranium. When choosing between two options – say, press the right button or the left button – the robot counts the number of uranium atoms that decayed during the previous minute. If the number is even – it presses the right button. If the number is odd – the left button. We can never be certain about the actions of such a robot. But nobody would call this contraption ‘free’, and we wouldn’t dream of allowing it to vote in democratic elections or holding it legally responsible for its actions.
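The thought experiment can be written out in a few lines of Python – a minimal sketch, in which a simple random draw stands in for the Geiger counter reading and the function names are illustrative assumptions rather than any real hardware interface. The point is that the robot’s choice is perfectly unpredictable, yet nothing that could be called freedom ever enters the loop:

```python
import random

def decays_last_minute() -> int:
    # Stand-in for reading a counter attached to the uranium lump.
    # Real decay counts are Poisson-distributed; a uniform draw is
    # enough for the illustration (an assumption, not real hardware).
    return random.randint(0, 10_000)

def choose_button() -> str:
    # Even count -> right button, odd count -> left button,
    # exactly as described above.
    return "right" if decays_last_minute() % 2 == 0 else "left"

if __name__ == "__main__":
    print(choose_button())  # unpredictable, yet nothing here is 'free'
```

Replace the uranium lump with neurons and the parity test with electrochemical thresholds, and the argument of the preceding paragraphs carries over unchanged: unpredictability is not freedom.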
To the best of our scientific understanding, determinism and randomness have divided the entire cake between them, leaving not even a crumb for ‘freedom’. The sacred word ‘freedom’ turns out to be, just like ‘soul’, a hollow term empty of any discernible meaning. Free will exists only in the imaginary stories we humans have invented.
The last nail in freedom’s coffin is provided by the theory of evolution. Just as evolution cannot be squared with eternal souls, neither can it swallow the idea of free will. For if humans are free, how could natural selection have shaped them? According to the theory of evolution, all the choices animals make – whether of habitat, food or mates – reflect their genetic code. If, thanks to its fit genes, an animal chooses to eat a nutritious mushroom and copulate with healthy and fertile mates, these genes pass on to the next generation. If, because of unfit genes, an animal opts for poisonous mushrooms and anaemic mates, these genes become extinct. However, if an animal ‘freely’ chooses what to eat and with whom to mate, then natural selection has nothing to work with.
When confronted with such scientific explanations people often brush them aside, pointing out that they feel free and that they act according to their own wishes and decisions. This is true. Humans act according to their desires. If by ‘free will’ we mean the ability to act according to our desires – then yes, humans have free will, and so do chimpanzees, dogs and parrots. When Polly wants a cracker, Polly eats a cracker. But the million-dollar question is not whether parrots and humans can act upon their inner desires – the question is whether they can choose their desires in the first place. Why does Polly want a cracker rather than a cucumber? Why am I so eager to kill my annoying neighbour instead of turning the other cheek? Why do I want to buy the red car rather than the black? Why do I prefer voting for the Conservatives rather than the Labour Party? I don’t choose any of these wishes. I feel a particular wish welling up within me because this is the feeling created by the biochemical processes in my brain. These processes might be deterministic or random, but not free.
You might reply that at least in the case of major decisions such as murdering a neighbour or electing a government, my choice does not reflect a momentary feeling, but a long and reasoned contemplation of weighty arguments. However, there are many possible trains of argument that I could follow, some of which will cause me to vote Conservative, others to vote Labour, and still others to vote UKIP or just stay at home. What makes me board one train of reasoning rather than another? In the Paddington of my brain, I might be compelled to embark on a particular train of reasoning by deterministic processes, or I might hop on at random. But I don’t ‘freely’ choose to think those thoughts that will make me vote Conservative.
These are not just hypotheses or philosophical speculations. Today we can use brain scanners to predict people’s desires and decisions well before they are aware of them. In one such experiment people are placed within a huge brain scanner, holding a switch in each hand. They are asked to press one of the two switches whenever they feel like it. Scientists observing neural activity in the brain can predict which switch the person will press well before the person actually does so, and even before the person is aware of their own intention. Neural events in the brain indicating the person’s decision begin from a few hundred milliseconds to a few seconds before the person is aware of this choice.2
The decision to press either the right or left switch certainly reflects the person’s choice. Yet it isn’t a free choice. In fact, our belief in free will results from faulty logic. When a biochemical chain reaction makes me desire to press the right switch, I feel that I really want to press the right switch. And this is true. I really do want to press it. Yet people erroneously jump to the conclusion that if I want to press it, I choose to want to. This is of course false. I don’t choose my desires. I only feel them, and act accordingly.
People nevertheless go on arguing about free will because even scientists all too often continue to use outdated theological concepts. For centuries, Christian, Muslim and Jewish theologians debated the relations between the soul and the will. They assumed that every human has an inner essence – called the soul – which is my true self. They further maintained that this self possesses various desires, just as it possesses clothes, vehicles and houses. I allegedly choose my desires in the same way I choose my clothes, and my fate is determined according to these choices. If I choose good desires, I end up in heaven; if I choose bad desires, I am destined for hell. The question then arose, how exactly do I choose my desires? Why, for example, did Eve desire to eat the forbidden fruit offered to her by the snake? Was this desire forced upon her? Did this desire just pop into her mind by pure chance? Or did she choose it ‘freely’? If she didn’t choose it freely, why punish her for it?
However, once we accept that there is no soul and that humans have no inner essence called ‘the self’, it no longer makes sense to ask, ‘How does the self choose its desires?’ It’s like asking a bachelor, ‘How does your wife choose her clothes?’ In reality, there is only a stream of consciousness, and desires arise and pass away within this stream, but there is no permanent self that owns the desires, hence it is meaningless to ask whether I choose my desires deterministically, randomly or freely.
It may sound extremely complicated, but it is surprisingly easy to test this idea. Next time a thought pops into your mind, stop and ask yourself: ‘Why did I think this particular thought? Did I decide a minute ago to think this thought, and only then think it? Or did it just arise, without any direction or permission from me? If I am indeed the master of my thoughts and decisions, can I decide not to think about anything at all for the next sixty seconds?’ Try that, and see what happens.
Doubting free will is not just a philosophical exercise. It has practical implications. If organisms indeed lack free will, it implies that we can manipulate and even control their desires using drugs, genetic engineering or direct brain stimulation.
If you want to see philosophy in action, pay a visit to a robo-rat laboratory. A robo-rat is a run-of-the-mill rat with a twist: scientists have implanted electrodes into the sensory and reward areas in the rat’s brain. This enables the scientists to manoeuvre the rat by remote control. After short training sessions, researchers have managed not only to make the rats turn left or right, but also to climb ladders, sniff around garbage piles, and do things that rats normally dislike, such as jumping from extreme heights. Armies and corporations are showing keen interest in the robo-rats, hoping they will prove useful in many tasks and situations. For example, robo-rats might help detect survivors trapped under collapsed buildings, locate bombs and booby traps, and map underground tunnels and caves.
Animal-welfare activists have voiced concern about the suffering such experiments inflict on the rats. Professor Sanjiv Talwar of the State University of New York, one of the leading robo-rat researchers, has dismissed these concerns, arguing that the rats actually enjoy the experiments. After all, explains Talwar, the rats ‘work for pleasure’ and when the electrodes stimulate the reward centres in their brains, ‘the rat feels Nirvana’.3
To the best of our understanding, the rat doesn’t feel that somebody else controls her, and she doesn’t feel that she is being coerced to do anything against her will. When Professor Talwar presses the remote control, the rat wants to move to the left, which is why she moves to the left. When the professor presses another switch, the rat wants to climb a ladder, which is why she climbs the ladder. After all, the rat’s desires are nothing but a pattern of firing neurons. What does it matter whether the neurons are firing because they are stimulated by other neurons or by transplanted electrodes connected to Professor Talwar’s remote control? If you ask the rat about it, she might well tell you, ‘Sure I have free will! Look, I want to turn left – and I turn left. I want to climb a ladder – and I climb a ladder. Doesn’t that prove that I have free will?’
Experiments performed on Homo sapiens indicate that, like rats, humans too can be manipulated, and that it is possible to create or annihilate even complex feelings such as love, anger, fear and depression by stimulating the right spots in the human brain. The US military has recently initiated experiments on implanting computer chips in people’s brains, hoping to use this method to treat soldiers suffering from post-traumatic stress disorder.4 In Hadassah Hospital in Jerusalem, doctors have pioneered a novel treatment for patients suffering from acute depression. They implant electrodes into the patient’s brain, and wire them to a minuscule computer implanted in the patient’s chest. On receiving a command from the computer, the electrodes transmit weak electric currents that paralyse the brain area responsible for the depression. The treatment does not always succeed, but in some cases patients reported that the feeling of dark emptiness that tormented them throughout their lives disappeared as if by magic.
One patient complained that several months after the operation he had a relapse and was overcome by severe depression. Upon inspection the doctors found the source of the problem: the computer’s battery had run out of power. Once they changed the battery, the depression quickly melted away.5
Due to obvious ethical restrictions researchers implant electrodes into human brains only in exceptional circumstances. Hence most relevant experiments on humans are conducted using non-invasive helmet-like devices (technically known as ‘transcranial direct-current stimulators’). The helmet is fitted with electrodes that attach to the outside of the scalp. It produces weak electromagnetic fields and directs them towards specific brain areas, thereby stimulating or inhibiting select brain activities.
The American military is experimenting with such helmets in the hope of sharpening the focus and enhancing the performance of soldiers both in training sessions and on the battlefield. The main experiments are conducted by the Human Effectiveness Directorate, which is located at an Ohio air force base. Though the results are far from conclusive, and though the hype around transcranial stimulators currently runs far ahead of actual achievements, several studies have indicated that the method may indeed enhance the cognitive abilities of drone operators, air-traffic controllers, snipers and other personnel whose duties require them to remain highly attentive for extended periods.6
Sally Adee, a journalist for the New Scientist, was allowed to visit a training facility for snipers and test the effects herself. At first she entered a battlefield simulator without wearing the transcranial helmet. Sally describes how fear swept over her as twenty masked men, strapped with suicide bombs and armed with rifles, charged straight towards her. ‘For every one I manage to shoot dead,’ writes Sally, ‘three new assailants pop up from nowhere. I’m clearly not shooting fast enough, and panic and incompetence are making me continually jam my rifle.’ Luckily for her, the assailants were just video images projected on huge screens all around her. Still, she was so disappointed with her poor performance that she felt like ditching the rifle and leaving the simulator.
Then they wired her up to the helmet. She reports feeling nothing unusual, except a slight tingle and a strange metallic taste in her mouth. Yet she began picking off the virtual terrorists one by one, as coolly and methodically as if she were Rambo or Clint Eastwood. ‘As twenty of them run at me brandishing their guns, I calmly line up my rifle, take a moment to breathe deeply, and pick off the closest one, before tranquilly assessing my next target. In what seems like next to no time, I hear a voice call out, “Okay, that’s it.” The lights come up in the simulation room . . . In the sudden quiet amid the bodies around me, I was really expecting more assailants, and I’m a bit disappointed when the team begins to remove my electrodes. I look up and wonder if someone wound the clocks forward. Inexplicably, twenty minutes have just passed. “How many did I get?” I ask the assistant. She looks at me quizzically. “All of them.”’
The experiment changed Sally’s life. In the following days she realised she had been through a ‘near-spiritual experience . . . what defined the experience was not feeling smarter or learning faster: the thing that made the earth drop out from under my feet was that for the first time in my life, everything in my head finally shut up . . . My brain without self-doubt was a revelation. There was suddenly this incredible silence in my head . . . I hope you can sympathise with me when I tell you that the thing I wanted most acutely for the weeks following my experience was to go back and strap on those electrodes. I also started to have a lot of questions. Who was I apart from the angry bitter gnomes that populate my mind and drive me to failure because I’m too scared to try? And where did those voices come from?’7
Some of those voices repeat society’s prejudices, some echo our personal history, and some articulate our genetic legacy. All of them together, says Sally, create an invisible story that shapes our conscious decisions in ways we seldom grasp. What would happen if we could rewrite our inner monologues, or even silence them completely on occasion? 8
As of 2016 transcranial stimulators are still in their infancy, and it is unclear if and when they will become a mature technology. So far they provide enhanced capabilities for only short durations, and Sally Adee’s twenty-minute experience may be quite exceptional (or perhaps even the outcome of the notorious placebo effect). Most published studies of transcranial stimulators are based on very small samples of people operating under special circumstances, and the long-term effects and hazards are completely unknown. However, if the technology does mature, or if some other method is found to manipulate the brain’s electric patterns, what would it do to human societies and to human beings?
People may well manipulate their brain’s electric circuits not just to shoot terrorists more proficiently, but also to achieve more mundane liberal goals. Namely, to study and work more efficiently, immerse themselves in games and hobbies, and be able to focus on what interests them at any particular moment, be it maths or football. However, if and when such manipulations become routine, the supposedly free will of customers will become just another product to purchase. You want to master the piano but whenever it comes time to practise you prefer to watch television? No problem: just put on a helmet, install the right software, and you will be downright aching to play the piano.
You may counter-argue that the ability to silence or enhance the voices in your head will actually strengthen rather than undermine your free will. Presently, you often fail to realise your most cherished and authentic desires due to external distractions. With the help of the attention helmet and similar devices, you could more easily silence the alien voices of parents, priests, spin doctors, advertisers and neighbours, and focus on what you want. However, as we will shortly see, the notion that you have a single self and that you could therefore distinguish your authentic desires from alien voices is just another liberal myth, debunked by the latest scientific research.
Who Are I?
Science undermines not only the liberal belief in free will, but also the belief in individualism. Liberals believe that we have a single and indivisible self. To be an individual means that I am in-dividual – that I cannot be divided. Yes, my body is made up of approximately 37 trillion cells,9 and each day both my body and my mind go through countless permutations and transformations. Yet if I really pay attention and strive to get in touch with myself, I am bound to discover deep inside a single, clear and authentic voice, which is my true self, and which is the source of all meaning and authority in the universe. For liberalism to make sense, I must have one – and only one – true self, for if I had more than one authentic voice, how would I know which voice to heed in the polling station, in the supermarket and in the marriage market?
However, over the last few decades the life sciences have reached the conclusion that this liberal story is pure mythology. The single authentic self is as real as the eternal soul, Santa Claus and the Easter Bunny. If I look really deep within myself, the seeming unity that I take for granted dissolves into a cacophony of conflicting voices, none of which is ‘my true self’. Humans aren’t individuals. They are ‘dividuals’.
The human brain is composed of two hemispheres connected by a thick neural cable. Each hemisphere controls the opposite side of the body. The right hemisphere controls the left side of the body, receives data from the left-hand field of vision and is responsible for moving the left arm and leg – and vice versa. This is why people who have had a stroke in their right hemisphere sometimes ignore the left side of their body (combing hair only on the right side of their head, or eating only the food placed on the right side of their plate).10
There are also emotional and cognitive differences between the two hemispheres, though the division is far from clear-cut. Most cognitive activities involve both hemispheres, but not to the same degree. For example, in most cases the left hemisphere plays a more important role in speech and logical reasoning, whereas the right hemisphere is more dominant in processing spatial information.
Many breakthroughs in understanding the relations between the two hemispheres were based on the study of epilepsy patients. In severe cases of epilepsy, electrical storms begin in one part of the brain but quickly spread to other parts, causing a very acute seizure. During such seizures patients lose control of their bodies, and frequent seizures consequently preclude them from holding jobs or leading normal lives. In the mid-twentieth century, when all other treatments failed, doctors alleviated the problem by cutting the thick neural cable connecting the two hemispheres, so that electrical storms beginning in one hemisphere could not spill over to the other. For brain scientists these patients were a goldmine of astounding data.
Some of the most notable studies on these split-brain patients were conducted by Professor Roger Wolcott Sperry, who won the 1981 Nobel Prize in Physiology or Medicine for his groundbreaking discoveries, and by his student, Professor Michael S. Gazzaniga. One study was conducted on a teenage boy. The boy was asked what he would like to do when he grew up. The boy answered: a draughtsman. This answer was provided by his left brain hemisphere, which plays a crucial part in logical reasoning as well as in speech. Yet the boy had another active speech centre in his right hemisphere, which could not control vocal language but could spell words using Scrabble tiles. The researchers were keen to know what the right hemisphere would say. So they spread Scrabble tiles on a table and wrote on a piece of paper: ‘What would you like to do when you grow up?’ They placed the paper at the edge of the boy’s left visual field. Data from the left visual field is processed in the right hemisphere. Since the right hemisphere could not use vocal language, the boy said nothing. But his left hand began moving rapidly across the table, collecting tiles from here and there, until it spelled out: ‘automobile race’. Spooky.11
Equally eerie behaviour was displayed by patient WJ, a Second World War veteran. WJ’s hands were each controlled by a different hemisphere. Since the two hemispheres were not in touch with one another, it sometimes happened that his right hand would reach out to open a door, and then his left hand would intervene and try to slam the door shut.
In another experiment Gazzaniga and his team flashed a picture of a chicken claw to the left half of the brain – the side responsible for speech – and simultaneously flashed a picture of a snowy landscape to the right half. When asked what he saw, patient PS answered ‘a chicken claw’. Gazzaniga then presented PS with a series of picture cards and asked him to point to the one that best matched what he had seen. The patient’s right hand (controlled by his left brain) pointed to a picture of a chicken, but simultaneously his left hand shot out and pointed to a snow shovel. Gazzaniga then asked PS the obvious question: ‘Why did you point both to the chicken and to the shovel?’ PS replied, ‘Oh, the chicken claw goes with the chicken, and you need a shovel to clean out the chicken shed.’12
What happened here? The left brain, which controls speech, had no data about the snow scene, and therefore did not really know why the left hand pointed to the shovel. So it just invented something credible. After repeating this experiment many times Gazzaniga concluded that the left hemisphere of the brain is the seat not only of our verbal abilities, but also of an internal interpreter that constantly tries to make sense of our life, using partial clues in order to concoct plausible stories.
In yet another experiment the non-verbal right hemisphere was shown a pornographic image. The patient reacted by blushing and giggling. ‘What did you see?’ asked the mischievous researchers. ‘Nothing, just a flash of light,’ said the left hemisphere, and the patient immediately giggled again, covering her mouth with her hand. ‘Why are you laughing then?’ they insisted. The bewildered left-hemisphere interpreter – struggling for some rational explanation – replied that one of the machines in the room looked very funny.13
It’s as if the CIA conducts a drone strike in Pakistan, unbeknown to the US State Department. When a journalist grills State Department officials about it, they concoct some plausible explanation. In reality, the spin doctors don’t have a clue why the strike was ordered, so they just invent something. A similar mechanism is employed by all human beings, not just by split-brain patients. Again and again my own private CIA does things without the approval or knowledge of my State Department, and then my State Department cooks up a story that presents me in the best possible light. Often enough the State Department itself becomes convinced of the pure fantasies it has invented.14
Similar conclusions have been reached by behavioural economists who want to know how people take economic decisions. Or more accurately, who takes these decisions. Who decides to buy a Toyota rather than a Mercedes, to go on holiday to Paris rather than Thailand, and to invest in South Korean treasury bonds rather than in the Shanghai stock exchange? Most experiments have indicated that there is no single self making any of these decisions. Rather, they result from a tug of war between different and often conflicting inner entities.
One groundbreaking experiment was conducted by Daniel Kahneman, who won the 2002 Nobel Prize in Economics. Kahneman asked a group of volunteers to join a three-part experiment. In the ‘short’ part of the experiment, the volunteers inserted one hand into a container filled with water at 14°C for one minute, which is unpleasant, bordering on painful. After sixty seconds they were told to take their hand out. In the ‘long’ part of the experiment, volunteers placed their other hand in a different water container whose temperature was also 14°C. But after sixty seconds hot water was secretly introduced into the container, slightly increasing the temperature to 15°C. Thirty seconds later they were told to pull out their hand. Some volunteers did the ‘short’ part first, while others began with the ‘long’ part. In either case, exactly seven minutes after both parts were over came the third and most important part of the experiment. The volunteers were told they must repeat one of the two parts; and it was up to them to choose which. Fully 80 per cent preferred to repeat the ‘long’ experiment, remembering it as less painful.
This cold-water experiment is so simple, yet its implications shake the core of the liberal world view. It exposes the existence of at least two different selves within us: the experiencing self and the narrating self. The experiencing self is our moment-to-moment consciousness. For the experiencing self, it’s obvious that the ‘long’ part of the cold-water experiment was worse. First you experience water at 14°C for sixty seconds, which is every bit as disagreeable as what you experience in the ‘short’ part, and then you must endure another thirty seconds of water at 15°C, which is marginally less bad, but still far from pleasant. For the experiencing self, it is impossible that adding a slightly unpleasant experience to a very unpleasant experience will make the entire episode more appealing.
However, the experiencing self remembers nothing. It tells no stories and is seldom consulted when it comes to big decisions. Retrieving memories, telling stories and making major decisions are all the monopoly of a very different entity inside us: the narrating self. The narrating self is akin to Gazzaniga’s left-brain interpreter. It is forever busy spinning yarns about the past and making plans for the future. Like every journalist, poet and politician, the narrating self takes many short cuts. It doesn’t narrate everything, and usually weaves the story using only peak moments and end results. The value of the whole experience is determined by averaging peaks with ends. For example, in evaluating the short part of the cold-water experiment the narrating self calculates the average between the worst part (the water was very cold) and the last moment (the water was still very cold) and concludes that ‘the water was very cold’. The narrating self does the same thing with the long part of the experiment. It finds the average between the worst part (the water was very cold) and the last moment (the water was not quite so cold) and concludes that ‘the water was somewhat warmer’. Crucially, the narrating self is duration-blind, giving no importance to the differing lengths of the two parts. So when it has a choice between the two, it prefers to repeat the long part, the one in which ‘the water was somewhat warmer’.
Every time the narrating self evaluates our experiences, it discounts their duration and adopts the ‘peak-end rule’ – it remembers only the peak moment and the end moment, and assesses the whole experience according to their average. This has far-reaching impact on all our practical decisions. Kahneman began investigating the experiencing self and the narrating self in the early 1990s when, together with Donald Redelmeier of the University of Toronto, he studied colonoscopy patients. In colonoscopy tests a tiny camera is inserted into the intestines through the anus in order to diagnose various bowel diseases. It is not a pleasant experience. Doctors wanted to know how to perform this procedure in the least painful way. Should they hasten the colonoscopy and cause patients more distress for a shorter duration, or should they work more slowly and carefully?
To answer this query, Kahneman and Redelmeier asked 154 patients to report their pain level at one-minute intervals during the colonoscopy. They used a scale of 0 to 10, where 0 meant no pain at all and 10 meant intolerable pain. After the colonoscopy was over patients were asked to rank the test’s ‘overall pain level’, also on a scale of 0 to 10. We might have expected the overall ranking to reflect the accumulation of minute-by-minute reports, that is, the longer the colonoscopy lasted and the more pain the patient experienced, the higher the overall pain level. But the actual results were different.
Just as in the cold-water experiment, the overall pain level neglected duration and instead reflected only the peak-end rule. One colonoscopy lasted eight minutes, at the worst moment the patient reported a level 8 pain, and in the last minute he reported a level 7 pain. After the test was over this patient ranked his overall pain level at 7.5. Another colonoscopy lasted twenty-four minutes. This time too peak pain was at level 8, but in the very last minute of the test, the patient reported a level 1 pain. This patient ranked his overall pain level at only 4.5. The fact that his colonoscopy lasted three times longer, and that he consequently suffered far more pain on aggregate, did not affect his memory at all. The narrating self doesn’t aggregate experiences – it averages them.
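The duration-blind arithmetic of the narrating self can be written out in a few lines of Python – a minimal sketch in which the minute-by-minute pain values are invented for the sake of illustration; only the peaks (8), the end values (7 and 1) and the durations (eight versus twenty-four minutes) match the figures reported above:

```python
def peak_end_rating(pain_per_minute: list[int]) -> float:
    # Remembered pain under the peak-end rule: the narrating self averages
    # the worst moment with the final moment and ignores duration entirely.
    return (max(pain_per_minute) + pain_per_minute[-1]) / 2

def total_pain(pain_per_minute: list[int]) -> int:
    # What the experiencing self actually endured, minute by minute.
    return sum(pain_per_minute)

# Invented pain profiles; only peaks, end values and lengths match the text.
short_sharp = [4, 6, 8, 7, 8, 8, 8, 7]                    # 8 minutes, ends at 7
long_gentle = [4, 6, 8, 7, 8, 8, 8, 7] + [5] * 15 + [1]   # 24 minutes, ends at 1

print(peak_end_rating(short_sharp))   # 7.5 – matches the first patient's report
print(peak_end_rating(long_gentle))   # 4.5 – matches the second patient's report
print(total_pain(short_sharp), total_pain(long_gentle))   # far more aggregate pain in the long test
```

The peak-end average ranks the long test as considerably less painful, even though the experiencing self endured more than twice as much pain in aggregate – exactly the pattern Kahneman and Redelmeier observed.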
So what do the patients prefer: to have a short and sharp colonoscopy, or a long and careful one? There isn’t a single answer to this question, because the patient has at least two different selves and they have different interests. If you ask the experiencing self, it would probably choose a short colonoscopy. But if you ask the narrating self, it would prefer a long colonoscopy because it remembers only the average between the worst moment and the last moment. Indeed, from the viewpoint of the narrating self, the doctor should add a few completely superfluous minutes of dull aches at the very end of the test, because it would make the entire memory far less traumatic.15
Paediatricians know this trick well. So do veterinarians. Many keep in their clinics jars full of treats, and hand a few to the kids (or dogs) after giving them a painful injection or an unpleasant medical examination. When the narrating self remembers the visit to the doctor, ten seconds of pleasure at the end of the visit will erase many minutes of anxiety and pain.
Evolution discovered this trick aeons before the paediatricians. Given the unbearable torments that many women undergo during childbirth, one might think that after going through it once no sane woman would ever agree to do so again. However, at the end of labour and in the following days the hormonal system secretes cortisol and beta-endorphins, which reduce the pain and create a feeling of relief and sometimes even of elation. Moreover, the growing love towards the baby and the acclaim from friends, family members, religious dogmas and nationalist propaganda, conspire to transform childbirth from a trauma into a positive memory.
40. An iconic image of the Virgin Mary holding baby Jesus. In most cultures, childbirth is portrayed as a wonderful experience rather than as a trauma.
40. Virgin and Child, Sassoferrato, Il (Giovanni Battista Salvi) (1609–85), Musee Bonnat, Bayonne, France © Bridgeman Images.
One study conducted at the Rabin Medical Center in Tel Aviv demonstrated that the memory of labour reflected mainly the peak and end points, while the overall duration had almost no impact at all.16 In another research project, 2,428 Swedish women were asked to recount their memories of labour two months after giving birth. Ninety per cent reported that the experience was either positive or very positive. They didn’t necessarily forget the pain – 28.5 per cent described it as the worst pain imaginable – yet it did not prevent them from evaluating the experience as positive. The narrating self goes over our experiences with a sharp pair of scissors and a thick black marker. It censors at least some moments of horror, and files in the archive a story with a happy ending.17
Most of our critical life choices – of partners, careers, residences and holidays – are taken by our narrating self. Suppose you can choose between two potential holidays. You can go to Jamestown, Virginia, and visit the historic colonial town where the first English settlement on mainland North America was founded in 1607. Alternatively, you can realise your number one dream vacation, whether it is trekking in Alaska, sunbathing in Florida or indulging in an unbridled bacchanalia of sex, drugs and gambling in Las Vegas. But there is a caveat: if you choose your dream vacation, then just before you board the plane home, you must take a pill that will obliterate all your memories of that vacation. What happened in Vegas will forever remain in Vegas. Which holiday would you choose? Most people would opt for colonial Jamestown, because most people give their credit card to the narrating self, which cares only about stories and has zero interest in even the most mind-blowing experiences if it cannot remember them.
Truth be told, the experiencing self and the narrating self are not completely separate entities but are closely intertwined. The narrating self uses our experiences as important (but not exclusive) raw materials for its stories. These stories, in turn, shape what the experiencing self actually feels. We experience hunger differently when we fast during Ramadan, when we fast in preparation for a medical examination, and when we don’t eat because we have no money. The different meanings ascribed to our hunger by the narrating self create very different actual experiences.
Furthermore, the experiencing self is often strong enough to sabotage the best-laid plans of the narrating self. I might, for instance, make a New Year’s resolution to start a diet and go to the gym every day. Such grand decisions are the monopoly of the narrating self. But the following week when it’s gym time, the experiencing self takes over. I don’t feel like going to the gym, and instead I order pizza, sit on the sofa and turn on the TV.
Nevertheless, most of us identify with our narrating self. When we say ‘I’, we mean the story in our head, not the onrushing stream of experiences we undergo. We identify with the inner system that takes the crazy chaos of life and spins out of it seemingly logical and consistent yarns. It doesn’t matter that the plot is full of lies and lacunas, and is rewritten again and again, so that today’s story flatly contradicts yesterday’s. The important thing is that we always retain the feeling that we have a single unchanging identity from birth to death (and perhaps even beyond). This gives rise to the questionable liberal belief that I am an individual, and that I possess a clear and consistent inner voice that provides meaning to the entire universe.18
The Meaning of Life
The narrating self is the star of Jorge Luis Borges’s story ‘A Problem’.19 The story concerns Don Quixote, the eponymous hero of Miguel de Cervantes’s famous novel. Don Quixote creates for himself an imaginary world in which he is a legendary champion going forth to fight giants and save Lady Dulcinea del Toboso. In reality Don Quixote is Alonso Quixano, an elderly country gentleman; the noble Dulcinea is an uncouth farm girl from a nearby village; and the giants are windmills. What would happen, wonders Borges, if, due to his belief in these fantasies, Don Quixote attacks and kills a real person? Borges asks a fundamental question about the human condition: what happens when the yarns spun by our narrating self cause grievous harm to ourselves or those around us? There are three main possibilities, says Borges.
One option is that nothing much happens. Don Quixote will not be bothered at all by killing a real man. His delusions are so overpowering that he will not be able to recognise the difference between committing actual murder and duelling with the imaginary windmill giants. Another option is that once he takes a person’s life, Don Quixote will be so horrified that he will be shaken out of his delusions. This is akin to a young recruit who goes to war believing that it is good to die for one’s country, only to end up completely disillusioned by the realities of warfare.
But there is a third option, much more complex and profound. As long as he fought imaginary giants, Don Quixote was just play-acting. However, once he actually kills someone, he will cling to his fantasies for all he is worth, because only they give meaning to his tragic misdeed. Paradoxically, the more sacrifices we make for an imaginary story, the more tenaciously we hold on to it, because we desperately want to give meaning to these sacrifices and to the suffering we have caused.
In politics this is known as the ‘Our Boys Didn’t Die in Vain’ syndrome. In 1915 Italy entered the First World War on the side of the Entente powers. Italy’s declared aim was to ‘liberate’ Trento and Trieste – two ‘Italian’ territories that the Austro-Hungarian Empire held ‘unjustly’. Italian politicians gave fiery speeches in parliament, vowing historical redress and promising a return to the glories of ancient Rome. Hundreds of thousands of Italian recruits went to the front shouting, ‘For Trento and Trieste!’ They thought it would be a walkover.
It was anything but. The Austro-Hungarian army held a strong defensive line along the Isonzo River. The Italians hurled themselves against it in eleven gory battles, gaining a few miles at most and never securing a breakthrough. In the first battle they lost 15,000 men. In the second battle they lost 40,000 men. In the third battle they lost 60,000. So it continued for more than two dreadful years until the eleventh engagement. Then the Austrians finally counter-attacked, and in the twelfth battle, known as the Battle of Caporetto, they soundly defeated the Italians and pushed them back almost to the gates of Venice. The glorious adventure became a bloodbath. By the end of the war almost 700,000 Italian soldiers were killed and more than a million were wounded.20
After losing the first Isonzo battle, Italian politicians had two choices. They could have admitted their mistake and offered to sign a peace treaty. Austria–Hungary had no claims against Italy and would have been delighted to sign one, because it was busy fighting for its survival against the much stronger Russians. Yet how could the politicians go to the parents, wives and children of 15,000 dead Italian soldiers and tell them: ‘Sorry, there has been a mistake. We hope you won’t take it too hard, but your Giovanni died in vain, and so did your Marco.’ Alternatively they could say: ‘Giovanni and Marco were heroes! They died so that Trieste would be Italian, and we will make sure they didn’t die in vain. We will go on fighting until victory is ours!’ Not surprisingly, the politicians preferred the second option. So they fought a second battle, and lost another 40,000 men. The politicians again decided it would be best to keep on fighting, because ‘our boys didn’t die in vain’.
41. A few of the victims of the Isonzo battles. Was their sacrifice in vain?
41. © Bettmann/Corbis.
Yet you cannot blame only the politicians. The masses also kept supporting the war. And when, after the war, Italy did not get all the territories it demanded, Italian democracy placed at its head Benito Mussolini and his fascists, who promised they would obtain proper compensation for all the sacrifices the Italians had made. While it’s difficult for a politician to tell parents that their son died for no good reason, it is far more painful for parents to say this to themselves – and it is even harder for the victims. A crippled soldier who lost his legs would rather tell himself, ‘I sacrificed myself for the glory of the eternal Italian nation!’ than ‘I lost my legs because I was stupid enough to believe self-serving politicians.’ It is much easier to live with the fantasy, because the fantasy gives meaning to the suffering.
Priests discovered this principle thousands of years ago. It underlies numerous religious ceremonies and commandments. If you want to make people believe in imaginary entities such as gods and nations, you should make them sacrifice something valuable. The more painful the sacrifice, the more convinced they will be of the existence of the imaginary recipient. A poor peasant sacrificing a valuable bull to Jupiter will become convinced that Jupiter really exists, otherwise how can he excuse his stupidity? The peasant will sacrifice another bull, and another, and another, just so he won’t have to admit that all the previous bulls were wasted. For exactly the same reason, if I have sacrificed a child to the glory of the Italian nation or my legs to the communist revolution, that’s usually enough to turn me into a zealous Italian nationalist or an enthusiastic communist. For if Italian national myths or communist propaganda are a lie, then I will be forced to concede that my child’s death or my own paralysis have been completely pointless. Few people have the stomach to admit such a thing.
The same logic is at work in the economic sphere too. In 1999 the government of Scotland decided to erect a new parliament building. According to the original plan the construction was expected to take two years and cost £40 million. In fact it took five years and cost £400 million. Every time the contractors encountered unforeseen difficulties and expenses, they went to the Scottish government and asked for more time and money. Every time this happened the government said to itself: ‘Well, we’ve already sunk tens of millions into this and we’ll be completely discredited if we stop now and end up with a partially built skeleton. Let’s authorise another £40 million.’ Several months later the same thing happened again, by which time the pressure to avoid ending up with an unfinished building was even greater. And a few months after that the story repeated itself, and so on until the actual cost was ten times the original estimate.
Governments are not the only ones to fall into this trap. Business corporations often sink millions into failed enterprises, while private individuals cling to dysfunctional marriages and dead-end jobs. Our narrating self would much prefer to continue suffering in the future, just so it won’t have to admit that our past suffering was devoid of all meaning. Eventually, if we want to come clean about past mistakes, our narrating self must invent some twist in the plot that will infuse these mistakes with meaning. For example, a pacifist war veteran may tell himself, ‘Yes, I’ve lost my legs because of a mistake. But thanks to this mistake I understand that war is hell, and from now onwards I will dedicate my life to fighting for peace. So my injury did have some positive meaning after all: it taught me to value peace.’
42. The Scottish Parliament building. Our sterling did not die in vain.
42. © Jeremy Sutton-Hibbert/Getty Images.
We see then that the self too is an imaginary story, just like nations, gods and money. Each of us has a sophisticated system that throws away most of our experiences, keeps only a few choice samples, mixes them up with bits from movies we’ve seen, novels we’ve read, speeches we’ve heard, and daydreams we’ve savoured, and out of all that jumble it weaves a seemingly coherent story about who I am, where I came from and where I am going. This story tells me what to love, whom to hate and what to do with myself. This story may even cause me to sacrifice my life, if that’s what the plot requires. We all have our genre. Some people live a tragedy, others inhabit a never-ending religious drama, some approach life as if it were an action film, and not a few act as if in a comedy. But in the end, they are all just stories.
What, then, is the meaning of life? Liberalism maintains that we shouldn’t expect some external entity to provide us with a ready-made meaning. Rather, each individual voter, customer and viewer ought to use his or her free will in order to create meaning, not just for his or her life but for the entire universe.
The life sciences, however, undermine liberalism, arguing that the free individual is just a fictional tale concocted by an assembly of biochemical algorithms. Every moment the biochemical mechanisms of the brain create a flash of experience, which immediately disappears. Then more flashes appear and fade, appear and fade, in quick succession. These momentary experiences do not add up to any enduring essence. The narrating self tries to impose order on this chaos by spinning a never-ending story, in which every such experience has its place, and hence every experience has some lasting meaning. But, as convincing and tempting as it may be, this story is a fiction. Medieval crusaders believed that God and heaven provided their lives with meaning; modern liberals believe that individual free choices provide life with meaning. They are all equally delusional.
Doubts about the existence of free will and individuals are nothing new, of course. More than 2,000 years ago thinkers in India, China and Greece argued that ‘the individual self is an illusion’. Yet such doubts don’t really change history much unless they have a practical impact on economics, politics and day-to-day life. Humans are masters of cognitive dissonance, and we allow ourselves to believe one thing in the laboratory and an altogether different thing in the courthouse or in parliament. Just as Christianity didn’t disappear the day Darwin published On the Origin of Species, so liberalism won’t vanish just because scientists have reached the conclusion that there are no free individuals.
Indeed, even Richard Dawkins, Steven Pinker and the other champions of the new scientific world view refuse to abandon liberalism. After dedicating hundreds of erudite pages to deconstructing the self and the freedom of will, they perform breathtaking intellectual somersaults that miraculously land them back in the eighteenth century, as if all the amazing discoveries of evolutionary biology and brain science have absolutely no bearing on the ethical and political ideas of Locke, Rousseau and Jefferson.
However, once the heretical scientific insights are translated into everyday technology, routine activities and economic structures, it will become increasingly difficult to sustain this double-game, and we – or our heirs – will probably require a brand-new package of religious beliefs and political institutions. At the beginning of the third millennium liberalism is threatened not by the philosophical idea that ‘there are no free individuals’, but rather by concrete technologies. We are about to face a flood of extremely useful devices, tools and structures that make no allowance for the free will of individual humans. Will democracy, the free market and human rights survive this flood?
The preceding pages took us on a brief tour of recent scientific discoveries that undermine the liberal philosophy. It’s time to examine the practical implications of these discoveries. Liberals uphold free markets and democratic elections because they believe that every human is a uniquely valuable individual, whose free choices are the ultimate source of authority. In the twenty-first century three practical developments might make this belief obsolete:
1. Humans will lose their economic and military usefulness, hence the economic and political system will stop attaching much value to them.
2. The system will continue to find value in humans collectively, but not in unique individuals.
3. The system will still find value in some unique individuals, but these will constitute a new elite of upgraded superhumans rather than the mass of the population.
Let’s examine all three threats in detail. The first – that technological developments will make humans economically and militarily useless – will not prove that liberalism is wrong on a philosophical level, but in practice it is hard to see how democracy, free markets and other liberal institutions can survive such a blow. After all, liberalism did not become the dominant ideology simply because its philosophical arguments were the most valid. Rather, liberalism succeeded because there was abundant political, economic and military sense in ascribing value to every human being. On the mass battlefields of modern industrial wars and in the mass production lines of modern industrial economies, every human counted. There was value to every pair of hands that could hold a rifle or pull a lever.
In the spring of 1793 the royal houses of Europe sent their armies to strangle the French Revolution in its cradle. The firebrands in Paris reacted by proclaiming the levée en masse and unleashing the first total war. On 23 August, the National Convention decreed that ‘From this moment until such time as its enemies shall have been driven from the soil of the Republic, all Frenchmen are in permanent requisition for the services of the armies. The young men shall fight; the married men shall forge arms and transport provisions; the women shall make tents and clothes and shall serve in the hospitals; the children shall turn old linen into lint; and the old men shall betake themselves to the public squares in order to arouse the courage of the warriors and preach hatred of kings and the unity of the Republic.’1
This decree sheds interesting light on the French Revolution’s most famous document – The Declaration of the Rights of Man and of the Citizen – which recognised that all citizens have equal value and equal political rights. Is it a coincidence that universal rights were proclaimed at the precise historical juncture when universal conscription was decreed? Though scholars may quibble about the exact relations between them, in the following two centuries a common argument in defence of democracy explained that giving citizens political rights is good, because the soldiers and workers of democratic countries perform better than those of dictatorships. Allegedly, granting political rights to people increases their motivation and their initiative, which is useful both on the battlefield and in the factory.
Thus Charles W. Eliot, president of Harvard from 1869 to 1909, wrote on 5 August 1917 in the New York Times that ‘democratic armies fight better than armies aristocratically organised and autocratically governed’ and that ‘the armies of nations in which the mass of the people determine legislation, elect their public servants, and settle questions of peace and war, fight better than the armies of an autocrat who rules by right of birth and by commission from the Almighty’.2
A similar rationale favoured the enfranchisement of women in the wake of the First World War. Realising the vital role of women in total industrial wars, countries saw the need to give them political rights in peacetime. Thus in 1918 President Woodrow Wilson became a supporter of women’s suffrage, explaining to the US Senate that the First World War ‘could not have been fought, either by the other nations engaged or by America, if it had not been for the services of women – services rendered in every sphere – not only in the fields of effort in which we have been accustomed to see them work, but wherever men have worked and upon the very skirts and edges of the battle itself. We shall not only be distrusted but shall deserve to be distrusted if we do not enfranchise them with the fullest possible enfranchisement.’3
However, in the twenty-first century the majority of both men and women might lose their military and economic value. Gone is the mass conscription of the two world wars. The most advanced armies of the twenty-first century rely far more on cutting-edge technology. Instead of limitless cannon fodder, countries now need only small numbers of highly trained soldiers, even smaller numbers of special forces super-warriors and a handful of experts who know how to produce and use sophisticated technology. Hi-tech forces ‘manned’ by pilotless drones and cyber-worms are replacing the mass armies of the twentieth century, and generals delegate more and more critical decisions to algorithms.
Aside from their unpredictability and their susceptibility to fear, hunger and fatigue, flesh-and-blood soldiers think and move on an increasingly irrelevant timescale. From the days of Nebuchadnezzar to those of Saddam Hussein, despite myriad technological improvements, war was waged on an organic timetable. Discussions lasted for hours, battles took days, and wars dragged on for years. Cyberwars, however, may last just a few minutes. When a lieutenant on shift at cyber-command notices something odd is going on, she picks up the phone to call her superior, who immediately alerts the White House. Alas, by the time the president reaches for the red handset, the war has already been lost. Within seconds a sufficiently sophisticated cyber strike might shut down the US power grid, wreck US flight control centres, cause numerous industrial accidents in nuclear plants and chemical installations, disrupt the police, army and intelligence communication networks – and wipe out financial records so that trillions of dollars simply vanish without a trace and nobody knows who owns what. The only thing curbing public hysteria is that, with the Internet, television and radio down, people will not be aware of the full magnitude of the disaster.
On a smaller scale, suppose two drones fight each other in the air. One drone cannot open fire without first receiving the go-ahead from a human operator in some distant bunker. The other is fully autonomous. Which drone do you think will prevail? If in 2093 the decrepit European Union sends its drones and cyborgs to snuff out a new French Revolution, the Paris Commune might press into service every available hacker, computer and smartphone, but it will have little use for most humans, except perhaps as human shields. It is telling that already today in many asymmetrical conflicts the majority of citizens are reduced to serving as shields for advanced armaments.
43. Left: Soldiers in action at the Battle of the Somme, 1916. Right: A pilotless drone.
43. Left: © Fototeca Gilardi/Getty Images. Right: © alxpin/Getty Images.
Even if you care more about justice than victory, you should probably opt to replace your soldiers and pilots with autonomous robots and drones. Human soldiers murder, rape and pillage, and even when they try to behave themselves, they all too often kill civilians by mistake. Computers programmed with ethical algorithms could far more easily conform to the latest rulings of the international criminal court.
In the economic sphere too the ability to hold a hammer or press a button is becoming less valuable than before, which endangers the critical alliance between liberalism and capitalism. In the twentieth century liberals explained that we don’t have to choose between ethics and economics. Protecting human rights and liberties was both a moral imperative and the key to economic growth. Britain, France and the United States allegedly prospered because they liberalised their economies and societies, and if Turkey, Brazil or China wanted to become equally prosperous, they had to do the same. In many if not most cases it was the economic rather than the moral argument that convinced tyrants and juntas to liberalise.
In the twenty-first century liberalism will have a much harder time selling itself. As the masses lose their economic importance, will the moral argument alone be enough to protect human rights and liberties? Will elites and governments go on valuing every human being even when it pays no economic dividends?
In the past there were many things only humans could do. But now robots and computers are catching up, and may soon outperform humans in most tasks. True, computers function very differently from humans, and it seems unlikely that computers will become humanlike any time soon. In particular, it doesn’t seem that computers are about to gain consciousness and start experiencing emotions and sensations. Over the past half century there has been an immense advance in computer intelligence, but there has been exactly zero advance in computer consciousness. As far as we know, computers in 2016 are no more conscious than their prototypes in the 1950s. However, we are on the brink of a momentous revolution. Humans are in danger of losing their economic value because intelligence is decoupling from consciousness.
Until today, high intelligence always went hand in hand with a developed consciousness. Only conscious beings could perform tasks that required a lot of intelligence, such as playing chess, driving cars, diagnosing diseases or identifying terrorists. However, we are now developing new types of non-conscious intelligence that can perform such tasks far better than humans. For all these tasks are based on pattern recognition, and non-conscious algorithms may soon outperform human consciousness in recognising patterns.
Science fiction movies generally assume that in order to match and surpass human intelligence, computers will have to develop consciousness. But real science tells a different story. There might be several alternative ways leading to super-intelligence, only some of which pass through the straits of consciousness. For millions of years organic evolution has been slowly sailing along the conscious route. The evolution of inorganic computers may completely bypass these narrow straits, charting a different and much quicker course to super-intelligence.
This raises a novel question: which of the two is really important, intelligence or consciousness? As long as they went hand in hand, debating their relative value was just an amusing pastime for philosophers. But in the twenty-first century this is becoming an urgent political and economic issue. And it is sobering to realise that, at least for armies and corporations, the answer is straightforward: intelligence is mandatory but consciousness is optional.
Armies and corporations cannot function without intelligent agents, but they don’t need consciousness and subjective experiences. The conscious experiences of a flesh-and-blood taxi driver are infinitely richer than those of a self-driving car, which feels absolutely nothing. The taxi driver can enjoy music while navigating the busy streets of Seoul. His mind may expand in awe as he looks up at the stars and contemplates the mysteries of the universe. His eyes may fill with tears of joy when he sees his baby girl taking her very first step. But the system doesn’t need all that from a taxi driver. All it really wants is to bring passengers from point A to point B as quickly, safely and cheaply as possible. And the autonomous car will soon be able to do that far better than a human driver, even though it cannot enjoy music or be awestruck by the magic of existence.
We should remind ourselves of the fate of horses during the Industrial Revolution. An ordinary farm horse can smell, love, recognise faces, jump over fences and do a thousand other things far better than a Model T Ford or a million-dollar Lamborghini. But cars nevertheless replaced horses because they were superior in the handful of tasks that the system really needed. Taxi drivers are highly likely to go the way of horses.
Indeed, if we forbid humans to drive not only taxis but vehicles altogether, and give computer algorithms a monopoly over traffic, we can then connect all vehicles to a single network, thereby rendering car accidents far less likely. In August 2015, one of Google’s experimental self-driving cars had an accident. As it approached a crossing and detected pedestrians wishing to cross, it applied its brakes. A moment later it was hit from behind by a sedan whose careless human driver was perhaps contemplating the mysteries of the universe instead of watching the road. This could not have happened if both vehicles had been guided by interlinked computers. The controlling algorithm would have known the position and intentions of every vehicle on the road, and would not have allowed two of its marionettes to collide. Such a system would save lots of time, money and human lives – but would also eliminate the human experience of driving a car and tens of millions of human jobs.4
Some economists predict that sooner or later, unenhanced humans will be completely useless. Robots and 3D printers are already replacing workers in manual jobs such as manufacturing shirts, and highly intelligent algorithms will do the same to white-collar occupations. Bank clerks and travel agents, who a short time ago seemed completely secure from automation, have become endangered species. How many travel agents do we need when we can use our smartphones to buy plane tickets from an algorithm?
Stock-exchange traders are also in danger. Most financial trading today is already being managed by computer algorithms that can process in a second more data than a human can in a year and can react to the data much faster than a human can blink. On 23 April 2013, Syrian hackers broke into Associated Press’s official Twitter account. At 13:07 they tweeted that the White House had been attacked and President Obama was hurt. Trade algorithms that constantly monitor newsfeeds reacted in no time and began selling stocks like mad. The Dow Jones went into free fall and within sixty seconds lost 150 points, equivalent to a loss of $136 billion! At 13:10 Associated Press clarified that the tweet was a hoax. The algorithms reversed gear and by 13:13 the Dow Jones had recuperated almost all the losses.
Three years earlier, on 6 May 2010, the New York Stock Exchange underwent an even sharper shock. Within five minutes – from 14:42 to 14:47 – the Dow Jones dropped by 1,000 points, wiping out $1 trillion. It then bounced back, returning to its pre-crash level in a little more than three minutes. That’s what happens when super-fast computer programs are in charge of our money. Experts have been trying ever since to understand what happened in this so-called ‘Flash Crash’. They know algorithms were to blame, but are still not sure exactly what went wrong. Some traders in the USA have already filed lawsuits against algorithmic trading, arguing that it unfairly discriminates against human beings who simply cannot react fast enough to compete. Quibbling over whether this really constitutes a violation of rights might provide lots of work and lots of fees for lawyers.
And these lawyers won’t necessarily be human. Movies and TV series give the impression that lawyers spend their days in court shouting ‘Objection!’ and making impassioned speeches. Yet most run-of-the-mill lawyers devote their time to perusing endless files, looking for precedents, loopholes and tiny pieces of potentially relevant evidence. Some are busy trying to figure out what happened on the night John Doe was murdered, or formulating a gargantuan business contract that will protect their client against every conceivable eventuality. What will be the fate of all these lawyers once sophisticated search algorithms can locate more precedents in a day than a human can in a lifetime, and once brain scans can reveal lies and deceptions at the press of a button? Even highly experienced lawyers and detectives cannot easily spot duplicity merely by observing people’s facial expressions and tone of voice. However, lying involves different brain areas from those used in telling the truth. We’re not there yet, but it is conceivable that in the not too distant future fMRI scanners could function as almost infallible truth machines. Where will that leave millions of lawyers, judges, cops and detectives? They might consider returning to school to learn a new profession.6
When they enter the classroom, however, they may well discover that the algorithms have got there first. Companies such as Mindojo are developing interactive algorithms that will not only teach me maths, physics and history, but will also simultaneously study me and get to know exactly who I am. Digital teachers will closely monitor every answer I give, and how long it took me to give it. Over time, they will discern my unique weaknesses as well as my strengths and will identify what gets me excited, and what makes my eyelids droop. They could teach me thermodynamics or geometry in a way that suits my personality type, even if that particular method doesn’t suit 99 per cent of the other pupils. And these digital teachers will never lose their patience, never shout at me, and never go on strike. It remains unclear, however, why on earth I would need to know thermodynamics or geometry in a world containing such intelligent computer programs.7
Even doctors are fair game for the algorithms. The first and foremost task of most doctors is to diagnose diseases correctly, and then suggest the best available treatment. If I arrive at the clinic complaining of fever and diarrhoea, I might be suffering from food poisoning. Then again, the same symptoms might result from a stomach virus, cholera, dysentery, malaria, cancer or some unknown new disease. My physician has only a few minutes to make a correct diagnosis, because that is all the time my health insurance pays for. This allows for no more than a few questions and perhaps a quick medical examination. The doctor then cross-references this meagre information with my medical history, and with the vast world of human maladies. Alas, not even the most diligent doctor can remember all my previous ailments and check-ups. Similarly, no doctor can be familiar with every illness and drug, or read every new article published in every medical journal. To top it all, the doctor is sometimes tired or hungry or perhaps even sick, which affects her judgement. No wonder that doctors sometimes err in their diagnoses or recommend a less-than-optimal treatment.
Now consider IBM’s famous Watson – an artificial intelligence system that won the Jeopardy! television game show in 2011, beating former human champions. Watson is currently being groomed to do more serious work, particularly in diagnosing diseases. An AI such as Watson has enormous potential advantages over human doctors. Firstly, an AI can hold in its databanks information about every known illness and medicine in history. It can then update these databanks daily, not only with the findings of new research, but also with medical statistics gathered from every linked-in clinic and hospital in the world.
44. IBM’s Watson defeating its two human opponents in Jeopardy! in 2011.
44. © Sony Pictures Television.
Secondly, Watson will be intimately familiar not only with my entire genome and my day-to-day medical history, but also with the genomes and medical histories of my parents, siblings, cousins, neighbours and friends. Watson will know instantly whether I visited a tropical country recently, whether I have recurring stomach infections, whether there have been cases of intestinal cancer in my family or whether people all over town are complaining this morning about diarrhoea.
Thirdly, Watson will never be tired, hungry or sick, and will have all the time in the world for me. I could sit comfortably on my sofa at home and answer hundreds of questions, telling Watson exactly how I feel. This is good news for most patients (except perhaps hypochondriacs). But if you enter medical school today in the expectation of still being a family doctor in twenty years, maybe you should think again. With such a Watson around, there is not much need for Sherlocks.
This threat hovers over the heads not only of general practitioners, but also of experts. Indeed, it might prove easier to replace doctors specialising in relatively narrow fields such as cancer diagnosis. In a recent experiment a computer algorithm correctly diagnosed 90 per cent of lung cancer cases presented to it, while human doctors had a success rate of only 50 per cent.8 In fact, the future is already here. CT scans and mammography exams are routinely checked by specialised algorithms, which provide doctors with a second opinion, and sometimes detect tumours that the doctors missed.9
A host of tough technical problems still prevent Watson and its ilk from replacing most doctors tomorrow morning. Yet these technical problems – however difficult – need only be solved once. The training of a human doctor is a complicated and expensive process that lasts years. When the process is complete, after a decade or so of studies and internships, all you get is one doctor. If you want two doctors, you have to repeat the entire process from scratch. In contrast, if and when you solve the technical problems hampering Watson, you will get not one, but an infinite number of doctors, available 24/7 in every corner of the world. So even if it costs $100 billion to make it work, in the long run it would be much cheaper than training human doctors.
Of course not all human doctors will disappear. Tasks that require a greater level of creativity than run-of-the-mill diagnosis will remain in human hands for the foreseeable future. Just as twenty-first-century armies are increasing the size of their elite special forces, so future healthcare services might offer many more openings to the medical equivalents of army rangers and navy SEALs. However, just as armies no longer need millions of GIs, so future healthcare services will not need millions of GPs.
What’s true of doctors is doubly true of pharmacists. In 2011 a pharmacy opened in San Francisco manned by a single robot. When a human comes to the pharmacy, within seconds the robot receives all of the customer’s prescriptions, as well as detailed information about any other medicines she takes, and her suspected allergies. The robot ensures that the new medications don’t interact adversely with any other medicine or allergy, and then dispenses the required drug to the customer. In its first year of operation the robotic pharmacist provided 2 million prescriptions, without making a single mistake. On average, flesh-and-blood pharmacists err in 1.7 per cent of prescriptions. In the United States alone this amounts to more than 50 million mistaken prescriptions every year!10
Some people argue that even if an algorithm could outperform doctors and pharmacists in the technical aspects of their professions, it could never replace their human touch. If your CT indicates you have cancer, would you prefer to receive the news from a cold machine, or from a human doctor attentive to your emotional state? Well, how about receiving the news from an attentive machine that tailors its words to your feelings and personality type? Remember that organisms are algorithms, and Watson could detect your emotional state with the same accuracy that it detects your tumours.
A human doctor recognises your emotional state by analysing external signals such as your facial expression and your tone of voice. Watson could not only analyse such external signals more accurately than a human doctor, but it could simultaneously analyse numerous internal indicators that are normally hidden from our eyes and ears. By monitoring your blood pressure, brain activities and countless other biometric data Watson could know exactly how you feel. Thanks to statistics garnered from millions of previous social encounters, Watson could then tell you precisely what you need to hear in just the right tone of voice. For all their vaunted emotional intelligence, human beings are often overwhelmed by their own emotions and react in counterproductive ways. For example, encountering an angry person they start shouting, and listening to a fearful person they let their own anxieties run wild. Watson would never succumb to such temptations. Having no emotions of its own, it would always offer the most appropriate response to your emotional state.
This idea has already partly been implemented by some customer-services departments, such as those pioneered by the Chicago-based Mattersight Corporation. Mattersight publishes its wares with the following blurb: ‘Have you ever spoken with someone and felt as though you just clicked? The magical feeling you get is the result of a personality connection. Mattersight creates that feeling every day, in call centers around the world.’11 When you phone customer service with a request or complaint, it usually takes a few seconds to route your call to a representative. In Mattersight systems your call is routed by a clever algorithm. You first state your reason for calling. The algorithm listens to your problem, analyses the words you have used and your tone of voice, and deduces not only your present emotional state but also your personality type – whether you are introverted, extroverted, rebellious or dependent. Based on this information, the algorithm forwards your call to the representative who best matches your mood and personality. The algorithm knows whether you need an empathetic person to listen patiently to your complaints, or a no-nonsense rational type who will give you the quickest technical solution. A good match means both happier customers and less time and money wasted by the customer-service department.12
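The routing idea can be expressed in a few lines of pseudo-logic. The sketch below is hypothetical: Mattersight’s actual features, categories and matching rules are not described here (a real system would rely on tone of voice and statistical models rather than keyword lists), so the keyword sets, personality labels and representative names are all invented for illustration.

```python
# Hypothetical sketch of routing a call by the caller's inferred mood and personality.
# Categories, keywords and names are invented; they only illustrate the idea in the text.

ANXIOUS_WORDS = {"worried", "afraid", "urgent", "scared"}
ANGRY_WORDS = {"unacceptable", "furious", "outrageous", "ridiculous"}

REPRESENTATIVES = {
    "empathetic": ["Dana", "Priya"],   # patient listeners for upset or anxious callers
    "no_nonsense": ["Omar", "Lee"],    # quick technical fixers for matter-of-fact callers
}

def classify_caller(transcript: str) -> str:
    """Crude stand-in for an algorithm that infers the caller's state from their words."""
    words = set(transcript.lower().split())
    if words & (ANXIOUS_WORDS | ANGRY_WORDS):
        return "empathetic"
    return "no_nonsense"

def route_call(transcript: str) -> str:
    """Forward the call to the first available representative of the best-matching type."""
    return REPRESENTATIVES[classify_caller(transcript)][0]

print(route_call("I am worried this urgent charge is still unresolved"))  # routes to Dana
print(route_call("My router drops the connection every evening"))         # routes to Omar
```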
The Useless Class
The most important question in twenty-first-century economics may well be what to do with all the superfluous people. What will conscious humans do, once we have highly intelligent non-conscious algorithms that can do almost everything better?
Throughout history the job market has been divided into three main sectors: agriculture, industry and services. Until about 1800, the vast majority of people worked in agriculture, and only a small minority worked in industry and services. During the Industrial Revolution people in developed countries left the fields and flocks. Most began working in industry, but growing numbers also took up jobs in the services sector. In recent decades developed countries underwent another revolution; as industrial jobs vanished the services sector expanded. In 2010 only 2 per cent of Americans worked in agriculture and 20 per cent worked in industry, while 78 per cent worked as teachers, doctors, webpage designers and so forth. When mindless algorithms are able to teach, diagnose and design better than humans, what will we do?
This is not an entirely new question. Ever since the Industrial Revolution erupted, people feared that mechanisation might cause mass unemployment. This never happened, because as old professions became obsolete, new professions evolved, and there was always something humans could do better than machines. Yet this is not a law of nature, and nothing guarantees it will continue to be like that in the future. Humans have two basic types of abilities: physical and cognitive. As long as machines competed with humans merely in physical abilities, there were countless cognitive tasks that humans performed better. So as machines took over purely manual jobs, humans focused on jobs requiring at least some cognitive skills. Yet what will happen once algorithms outperform us in remembering, analysing and recognising patterns?
The idea that humans will always have a unique ability beyond the reach of non-conscious algorithms is just wishful thinking. The current scientific answer to this pipe dream can be summarised in three simple principles:
1. Organisms are algorithms. Every animal – including Homo sapiens – is an assemblage of organic algorithms shaped by natural selection over millions of years of evolution.
2. Algorithmic calculations are not affected by the materials from which the calculator is built. Whether an abacus is made of wood, iron or plastic, two beads plus two beads equals four beads.
3. Hence there is no reason to think that organic algorithms can do things that non-organic algorithms will never be able to replicate or surpass. As long as the calculations remain valid, what does it matter whether the algorithms are manifested in carbon or silicon?
True, at present there are numerous things that organic algorithms do better than non-organic ones, and experts have repeatedly declared that something will ‘for ever’ remain beyond the reach of non-organic algorithms. But it turns out that ‘for ever’ often means no more than a decade or two. Until a short time ago facial recognition was a favourite example of something that even babies accomplish easily but which escaped even the most powerful computers. Today facial-recognition programs are able to identify people far more efficiently and quickly than humans can. Police forces and intelligence services now routinely use such programs to scan countless hours of video footage from surveillance cameras in order to track down suspects and criminals.
In the 1980s, when people discussed the unique nature of humanity, they habitually used chess as the primary proof of human superiority. They believed that computers would never beat humans at chess. On 10 February 1996, IBM’s Deep Blue defeated world chess champion Garry Kasparov, laying to rest that particular claim for human pre-eminence.
Deep Blue was given a head start by its creators, who preprogrammed it not only with the basic rules of chess, but also with detailed instructions regarding chess strategies. A new generation of AI prefers machine learning to human advice. In February 2015 a program developed by Google DeepMind learned by itself how to play forty-nine classic Atari games. One of the developers, Dr Demis Hassabis, explained that ‘the only information we gave the system was the raw pixels on the screen and the idea that it had to get a high score. And everything else it had to figure out by itself.’ The program managed to learn the rules of all the games presented to it, from Pac-Man and Space Invaders to car racing and tennis games. It then played most of them as well as or better than humans, sometimes coming up with strategies that never occur to human players.13
45. Deep Blue defeating Garry Kasparov.
45. © STAN HONDA/AFP/Getty Images.
Shortly afterwards AI scored an even more sensational success, when Google’s AlphaGo software taught itself how to play Go, an ancient Chinese strategy board game significantly more complex than chess. Go’s intricacies were long considered far beyond the reach of AI programs. In March 2016 a match was held in Seoul between AlphaGo and the South Korean Go champion, Lee Sedol. AlphaGo trounced Lee 4–1 by employing unorthodox moves and original strategies that stunned the experts. Whereas prior to the match most professional Go players were certain that Lee would win, after analysing AlphaGo’s moves most concluded that the game was up and that humans no longer had any hope of beating AlphaGo and its progeny.
Computer algorithms have recently proven their worth in ball games, too. For many decades, baseball teams used the wisdom, experience and gut instincts of professional scouts and managers to pick players. The best players fetched millions of dollars, and naturally enough the rich teams grabbed the cream of the crop, whereas poorer teams had to settle for the scraps. In 2002 Billy Beane, the general manager of the low-budget Oakland Athletics, decided to beat the system. He relied on an arcane computer algorithm developed by economists and computer geeks to create a winning team from players whom human scouts had overlooked or undervalued. Old-timers were incensed that Beane’s algorithm had invaded the hallowed halls of baseball. They insisted that picking baseball players is an art, and that only humans with intimate, long-standing experience of the game can master it. A computer program could never do it, because it could never decipher the secrets and the spirit of baseball.
They soon had to eat their baseball caps. Beane’s shoestring-budget ($44 million) algorithmic team not only held its own against baseball giants such as the New York Yankees ($125 million), but became the first team in American League history ever to win twenty consecutive games. Not that Beane and Oakland got to enjoy their success for long. Soon enough many other teams adopted the same algorithmic approach, and since the Yankees and Red Sox could pay far more for both baseball players and computer software, low-budget teams such as the Oakland Athletics ended up having an even smaller chance of beating the system than before.14
In 2004 Professor Frank Levy from MIT and Professor Richard Murnane from Harvard published a thorough study of the job market, listing those professions most likely to undergo automation. Truck driving was given as an example of a job that could not possibly be automated in the foreseeable future. It is hard to imagine, they wrote, that algorithms could safely drive trucks on a busy road. A mere ten years later, Google and Tesla can not only imagine this, but are actually making it happen.15
In fact, as time goes by it becomes easier and easier to replace humans with computer algorithms, not merely because the algorithms are getting smarter, but also because humans are professionalising. Ancient hunter-gatherers mastered a very wide variety of skills in order to survive, which is why it would be immensely difficult to design a robotic hunter-gatherer. Such a robot would have to know how to prepare spear points from flint stones, find edible mushrooms in a forest, track down a mammoth and coordinate a charge with a dozen other hunters, and afterwards use medicinal herbs to bandage any wounds. However, over the last few thousand years we humans have been specialising. A taxi driver or a cardiologist specialises in a much narrower niche than a hunter-gatherer, which makes it easier to replace them with AI. As I have repeatedly stressed, AI is nowhere near human-like existence. But 99 per cent of human qualities and abilities are simply redundant for the performance of most modern jobs. For AI to squeeze humans out of the job market it needs only outperform us in the specific abilities a particular profession demands.
Even the managers in charge of all these activities can be replaced. Thanks to its powerful algorithms, Uber can manage millions of taxi drivers with only a handful of humans. Most of the commands are given by the algorithms without any need of human supervision.16 In May 2014 Deep Knowledge Ventures – a Hong Kong venture-capital firm specialising in regenerative medicine – broke new ground by appointing an algorithm named VITAL to its board. VITAL makes investment recommendations by analysing huge amounts of data regarding the financial situation, clinical trials and intellectual property of prospective companies. Like the other five board members, the algorithm gets to vote on whether or not the firm makes an investment in a specific company.
Examining VITAL’s record so far, it seems that it has already picked up at least one managerial vice: nepotism. It has recommended investing in companies that grant algorithms more authority. For example, with VITAL’s blessing, Deep Knowledge Ventures recently invested in Pathway Pharmaceuticals, which employs an algorithm called OncoFinder to select and rate personalised cancer therapies.17
As algorithms push humans out of the job market, wealth and power might become concentrated in the hands of the tiny elite that owns the all-powerful algorithms, creating unprecedented social and political inequality. Today millions of taxi drivers, bus drivers and truck drivers have significant economic and political clout, each commanding a tiny share of the transportation market. If their collective interests are threatened, they can unionise, go on strike, stage boycotts and create powerful voting blocs. However, once millions of human drivers are replaced by a single algorithm, all that wealth and power will be cornered by the corporation that owns the algorithm, and by the handful of billionaires who own the corporation. Alternatively, the algorithms might themselves become the owners. Human law already recognises intersubjective entities like corporations and nations as ‘legal persons’. Though Toyota or Argentina has neither a body nor a mind, they are subject to international laws, they can own land and money, and they can sue and be sued in court. We might soon grant similar status to algorithms. An algorithm could then own a transportation empire or a venture-capital fund without having to obey the wishes of any human master.
If the algorithm makes the right decisions, it could accumulate a fortune, which it could then invest as it sees fit, perhaps buying your house and becoming your landlord. If you infringe on the algorithm’s legal rights – say, by not paying rent – the algorithm could hire lawyers and sue you in court. If such algorithms consistently outperform human capitalists, we might end up with an algorithmic upper class owning most of our planet. This may sound impossible, but before dismissing the idea, remember that most of our planet is already legally owned by non-human intersubjective entities, namely nations and corporations. Indeed, 5,000 years ago much of Sumer was owned by imaginary gods such as Enki and Inanna. If gods can possess land and employ people, why not algorithms?
So what will people do? Art is often said to provide us with our ultimate (and uniquely human) sanctuary. In a world where computers have replaced doctors, drivers, teachers and even landlords, would everyone become an artist? Yet it is hard to see why artistic creation would be safe from the algorithms. Why are we so confident that computers will never be able to outdo us in the composition of music? According to the life sciences, art is not the product of some enchanted spirit or metaphysical soul, but rather of organic algorithms recognising mathematical patterns. If so, there is no reason why non-organic algorithms couldn’t master it.
David Cope is a musicology professor at the University of California in Santa Cruz. He is also one of the more controversial figures in the world of classical music. Cope has written computer programs that compose concertos, chorales, symphonies and operas. His first creation was named EMI (Experiments in Musical Intelligence), which specialised in imitating the style of Johann Sebastian Bach. It took seven years to create the program, but once the work was done EMI composed 5,000 chorales à la Bach in a single day. Cope arranged for a performance of a few select chorales at a music festival in Santa Cruz. Enthusiastic members of the audience praised the stirring performance, and explained excitedly how the music had touched their innermost being. They didn’t know that it had been created by EMI rather than Bach, and when the truth was revealed some reacted with glum silence, while others shouted in anger.
EMI continued to improve and learned to imitate Beethoven, Chopin, Rachmaninov and Stravinsky. Cope got EMI a contract, and its first album – Classical Music Composed by Computer – sold surprisingly well. Publicity brought increasing hostility from classical-music buffs. Professor Steve Larson from the University of Oregon sent Cope a challenge for a musical showdown. Larson suggested that professional pianists play three pieces one after the other: one each by Bach, by EMI, and by Larson himself. The audience would then be asked to vote on who composed which piece. Larson was convinced that people would easily distinguish between soulful human compositions and the lifeless artefact of a machine. Cope accepted the challenge. On the appointed date hundreds of lecturers, students and music fans assembled in the University of Oregon’s concert hall. At the end of the performance, a vote was taken. The result? The audience thought that EMI’s piece was genuine Bach, that Bach’s piece was composed by Larson, and that Larson’s piece was produced by a computer.
Critics continued to argue that EMI’s music is technically excellent, but that it lacks something. It is too accurate. It has no depth. It has no soul. Yet when people heard EMI’s compositions without being informed of their provenance, they frequently praised them precisely for their soulfulness and emotional resonance.
Following EMI’s successes Cope created newer and even more sophisticated programs. His crowning achievement was Annie. Whereas EMI composed music according to predetermined rules, Annie is based on machine learning. Its musical style constantly changes and develops in response to new inputs from the outside world. Cope has no idea what Annie is going to compose next. Indeed, Annie does not restrict itself to music composition but also explores other art forms such as haiku poetry. In 2011 Cope published Comes the Fiery Night: 2,000 Haiku by Man and Machine. Some of the haiku were written by Annie, and the rest by organic poets. The book does not disclose which are which. If you think you can tell the difference between human creativity and machine output, you are welcome to test your claim.18
In the nineteenth century the Industrial Revolution created a huge urban proletariat, and socialism spread because no other creed managed to answer the unprecedented needs, hopes and fears of this new working class. Liberalism eventually defeated socialism only by adopting the best parts of the socialist programme. In the twenty-first century we might witness the creation of a massive new unworking class: people devoid of any economic, political or even artistic value, who contribute nothing to the prosperity, power and glory of society. This ‘useless class’ will not merely be unemployed – it will be unemployable.
In September 2013 two Oxford researchers, Carl Benedikt Frey and Michael A. Osborne, published ‘The Future of Employment’, in which they surveyed the likelihood of different professions being taken over by computer algorithms within the next twenty years. The algorithm developed by Frey and Osborne to do the calculations estimated that 47 per cent of US jobs are at high risk. For example, there is a 99 per cent probability that by 2033 human telemarketers and insurance underwriters will lose their jobs to algorithms. There is a 98 per cent probability that the same will happen to sports referees, 97 per cent that it will happen to cashiers and 96 per cent to chefs. Waiters – 94 per cent. Paralegal assistants – 94 per cent. Tour guides – 91 per cent. Bakers – 89 per cent. Bus drivers – 89 per cent. Construction labourers – 88 per cent. Veterinary assistants – 86 per cent. Security guards – 84 per cent. Sailors – 83 per cent. Bartenders – 77 per cent. Archivists – 76 per cent. Carpenters – 72 per cent. Lifeguards – 67 per cent. And so forth. There are of course some safe jobs. The likelihood that computer algorithms will displace archaeologists by 2033 is only 0.7 per cent, because their job requires highly sophisticated types of pattern recognition, and doesn’t produce huge profits. Hence it is improbable that corporations or government will make the necessary investment to automate archaeology within the next twenty years.19
Of course, by 2033 many new professions are likely to appear, for example, virtual-world designers. But such professions will probably require much more creativity and flexibility than current run-of-the-mill jobs, and it is unclear whether forty-year-old cashiers or insurance agents will be able to reinvent themselves as virtual-world designers (try to imagine a virtual world created by an insurance agent!). And even if they do so, the pace of progress is such that within another decade they might have to reinvent themselves yet again. After all, algorithms might well outperform humans in designing virtual worlds too. The crucial problem isn’t creating new jobs. The crucial problem is creating new jobs that humans perform better than algorithms.20
Since we do not know how the job market would look in 2030 or 2040, already today we have no idea what to teach our kids. Most of what they currently learn at school will probably be irrelevant by the time they are forty. Traditionally, life has been divided into two main parts: a period of learning followed by a period of working. Very soon this traditional model will become utterly obsolete, and the only way for humans to stay in the game will be to keep learning throughout their lives, and to reinvent themselves repeatedly. Many if not most humans may be unable to do so.
The coming technological bonanza will probably make it feasible to feed and support these useless masses even without any effort from their side. But what will keep them occupied and content? People must do something, or they go crazy. What will they do all day? One answer might be drugs and computer games. Unnecessary people might spend increasing amounts of time within 3D virtual-reality worlds that would provide them with far more excitement and emotional engagement than the drab reality outside. Yet such a development would deal a mortal blow to the liberal belief in the sacredness of human life and of human experiences. What’s so sacred about useless bums who pass their days devouring artificial experiences in La La Land?
Some experts and thinkers, such as Nick Bostrom, warn that humankind is unlikely to suffer this degradation, because once artificial intelligence surpasses human intelligence, it might simply exterminate humankind. The AI would likely do so either for fear that humankind would turn against it and try to pull its plug, or in pursuit of some unfathomable goal of its own. For it would be extremely difficult for humans to control the motivation of a system smarter than themselves.
Even preprogramming the system with seemingly benign goals might backfire horribly. One popular scenario imagines a corporation designing the first artificial super-intelligence and giving it an innocent test such as calculating pi. Before anyone realises what is happening, the AI takes over the planet, eliminates the human race, launches a campaign of conquest to the ends of the galaxy, and transforms the entire known universe into a giant super-computer that for billions upon billions of years calculates pi ever more accurately. After all, this is the divine mission its Creator gave it.21
A Probability of 87 Per Cent
At the beginning of this chapter we identified several practical threats to liberalism. The first is that humans might become militarily and economically useless. This is just a possibility, of course, not a prophecy. Technical difficulties or political objections might slow down the algorithmic invasion of the job market. Alternatively, since much of the human mind is still uncharted territory, we don’t really know what hidden talents humans might discover in themselves, and what novel jobs they might create to offset the loss of others. That, however, may not be enough to save liberalism. For liberalism believes not just in the value of human beings – it also believes in individualism. The second threat facing liberalism is that, while the system might still need humans in the future, it will not need individuals. Humans will continue to compose music, teach physics and invest money, but the system will understand these humans better than they understand themselves and will make most of the important decisions for them. The system will thereby deprive individuals of their authority and freedom.
The liberal belief in individualism is founded on the three important assumptions that we discussed:
1. I am an in-dividual – that is, I have a single essence that cannot be divided into parts or subsystems. True, this inner core is wrapped in many outer layers. But if I make the effort to peel away these external crusts, I will find deep within myself a clear and single inner voice, which is my authentic self.
2. My authentic self is completely free.
3. It follows from the first two assumptions that I can know things about myself nobody else can discover. For only I have access to my inner space of freedom, and only I can hear the whispers of my authentic self. This is why liberalism grants the individual so much authority. I cannot trust anyone else to make choices for me, because no one else can know who I really am, how I feel and what I want. This is why the voter knows best, why the customer is always right and why beauty is in the eye of the beholder.
However, the life sciences challenge all three assumptions. According to them:
1. Organisms are algorithms, and humans are not individuals – they are ‘dividuals’. That is, humans are an assemblage of many different algorithms lacking a single inner voice or a single self.
2. The algorithms constituting a human are not free. They are shaped by genes and environmental pressures, and take decisions either deterministically or randomly – but not freely.
3. It follows that an external algorithm could theoretically know me much better than I can ever know myself. An algorithm that monitors each of the systems that comprise my body and my brain could know exactly who I am, how I feel and what I want. Once developed, such an algorithm could replace the voter, the customer and the beholder. Then the algorithm will know best, the algorithm will always be right, and beauty will be in the calculations of the algorithm.
During the nineteenth and twentieth centuries the belief in individualism nevertheless made good practical sense, because there were no external algorithms that could actually monitor me effectively. States and markets may have wished to do exactly that, but they lacked the necessary technology. The KGB and FBI had only a vague understanding of my biochemistry, genome and brain, and even if agents bugged every phone call I made and recorded every chance encounter on the street, they did not have the computing power to analyse all that data. Consequently, given twentieth-century technological conditions, liberals were right to argue that nobody can know me better than I know myself. Humans therefore had a very good reason to regard themselves as an autonomous system and to follow their own inner voices rather than the commands of Big Brother.
However, twenty-first-century technology may enable external algorithms to ‘hack humanity’ and know me far better than I know myself. Once this happens, the belief in individualism will collapse and authority will shift from individual humans to networked algorithms. People will no longer see themselves as autonomous beings running their lives according to their wishes, but instead will become accustomed to seeing themselves as a collection of biochemical mechanisms that is constantly monitored and guided by a network of electronic algorithms. For this to happen, there is no need of an external algorithm that knows me perfectly and never makes any mistake; it is enough that the algorithm will know me better than I know myself, and will make fewer mistakes than I do. It will then make sense to trust this algorithm with more and more of my decisions and life choices.
We have already crossed this line as far as medicine is concerned. In hospitals we are no longer individuals. It is highly likely that during your lifetime many of the most momentous decisions about your body and health will be taken by computer algorithms such as IBM’s Watson. And this is not necessarily bad news. Diabetics already carry sensors that automatically check their sugar level several times a day, alerting them whenever it crosses a dangerous threshold. In 2014 researchers at Yale University announced the first successful trial of an ‘artificial pancreas’ controlled by an iPhone. Fifty-two diabetics took part in the experiment. Each patient had a tiny sensor and a tiny pump implanted in his or her abdomen. The pump was connected to small tubes of insulin and glucagon, two hormones that together regulate sugar levels in the blood. The sensor constantly measured the sugar level, transmitting the data to an iPhone. The iPhone hosted an application that analysed the information, and whenever necessary gave orders to the pump, which injected measured amounts of either insulin or glucagon – without any need of human intervention.22
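To make the logic of such a closed loop concrete, here is a minimal Python sketch of the kind of sensor-and-pump feedback described above. The sensor and pump interfaces and the dosing numbers are assumed placeholders for illustration; this is not the Yale team's actual software, and no real device works from numbers this crude.

```python
# Hypothetical closed-loop 'artificial pancreas' -- illustrative only.
# The sensor and pump objects stand in for real device drivers.
import time

TARGET_MG_DL = 100   # desired blood-glucose level (assumed)
HIGH_MG_DL = 140     # above this, deliver insulin (assumed)
LOW_MG_DL = 70       # below this, deliver glucagon (assumed)

def control_loop(sensor, pump, interval_seconds=300):
    """Read the glucose level every few minutes and dose hormones as needed."""
    while True:
        glucose = sensor.read_glucose_mg_dl()      # assumed sensor API
        if glucose > HIGH_MG_DL:
            # Dose scales with how far we are above target (simplified).
            pump.deliver_insulin((glucose - TARGET_MG_DL) * 0.01)
        elif glucose < LOW_MG_DL:
            pump.deliver_glucagon(0.5)             # fixed small correction dose
        time.sleep(interval_seconds)
```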
Many other people who suffer from no serious illnesses have begun to use wearable sensors and computers to monitor their health and activities. These devices – incorporated into anything from smartphones and wristwatches to armbands and underwear – record diverse biometric data such as blood pressure and heart rate. The data is then fed into sophisticated computer programs that advise the wearer on how to alter his or her diet and daily routines in order to enjoy improved health and a longer and more productive life.23 Google, together with the drug giant Novartis, is developing a contact lens that checks glucose levels in the blood every few seconds by analysing the composition of tears.24 Pixie Scientific sells ‘smart diapers’ that analyse baby poop for clues about the child’s medical condition. In November 2014 Microsoft launched the Microsoft Band – a smart armband that monitors among other things your heartbeat, the quality of your sleep and the number of steps you take each day. An application called Deadline goes a step further, informing you of how many years of life you have left, given your current habits.
Some people use these apps without thinking too deeply about it, but for others this is already an ideology, if not a religion. The Quantified Self movement argues that the self is nothing but mathematical patterns. These patterns are so complex that the human mind has no chance of understanding them. So if you wish to obey the old adage and know thyself, you should not waste your time on philosophy, meditation or psychoanalysis, but rather you should systematically collect biometric data and allow algorithms to analyse them for you and tell you who you are and what you should do. The movement’s motto is ‘Self-knowledge through numbers’.25
In 2000 the Israeli singer Shlomi Shavan conquered the local playlists with his hit song ‘Arik’. It’s about a guy who is obsessed with his girlfriend’s ex, Arik. He demands to know who is better in bed – he, or Arik? The girlfriend dodges the question, saying that it was different with each of them. The guy is not satisfied and demands: ‘Talk numbers, lady.’ Well, precisely for such guys, a company called Bedpost sells biometric armbands that you can wear while having sex. The armband collects data such as heart rate, sweat level, duration of sexual intercourse, duration of orgasm and the number of calories you burned. The data is fed into a computer that analyses the information and ranks your performance with precise numbers. No more fake orgasms and ‘How was it for you?’26
People who experience themselves through the unrelenting mediation of such devices may begin to see themselves more as a collection of biochemical systems than as individuals, and their decisions will increasingly reflect the conflicting demands of the various systems.27 Suppose you have two free hours a week, and are uncertain whether to use them playing chess or tennis. A good friend might ask: ‘What does your heart tell you?’ ‘Well,’ you answer, ‘as far as my heart is concerned, it’s obvious tennis is better. It’s also better for my cholesterol level and blood pressure. But my fMRI scans indicate I should strengthen my left pre-frontal cortex. In my family dementia is quite common, and my uncle had it at a very early age. The latest studies indicate that a weekly game of chess can help delay its onset.’
You can already find much more extreme examples of external mediation in the geriatric wards of hospitals. Humanism fantasises about old age as a period of wisdom and awareness. The ideal elder may suffer from bodily ailments and weaknesses, but his mind is quick and sharp, and he has eighty years of insights to dispense. He knows exactly what’s what, and always has astute advice for the grandchildren and other visitors. Twenty-first-century octogenarians don’t always conform to that image. Thanks to our growing understanding of human biology, medicine can keep us alive long enough for our minds and ‘authentic selves’ to disintegrate and dissolve. All too often, what’s left is a collection of dysfunctional biological systems kept going by a collection of monitors, computers and pumps.
At a deeper level, as genetic technologies are integrated into daily life and people develop increasingly intimate relations with their DNA, the single self might blur even further and the authentic inner voice might dissolve into a noisy crowd of genes. When faced by difficult dilemmas and decisions, I may stop searching for my inner voice and instead consult my inner genetic parliament.
On 14 May 2013 actress Angelina Jolie published an article in the New York Times about her decision to have a double mastectomy. Jolie had lived for years under the shadow of breast cancer, as both her mother and grandmother died of it at a relatively early age. Jolie herself did a genetic test that confirmed she was carrying a dangerous mutation of the BRCA1 gene. According to recent statistical surveys, women carrying this mutation have an 87 per cent probability of developing breast cancer. Even though at the time Jolie did not have cancer, she decided to pre-empt the dreaded disease and have a double mastectomy. In the article Jolie explained that ‘I choose not to keep my story private because there are many women who do not know that they might be living under the shadow of cancer. It is my hope that they, too, will be able to get gene-tested, and that if they have a high risk they, too, will know that they have strong options.’28
Deciding whether or not to undergo a mastectomy is a difficult and potentially fatal choice. Beyond the discomforts, dangers and financial costs of the operation and its follow-up treatments, the decision can have far-reaching effects on one’s health, body image, emotional well-being and relationships. Jolie’s choice, and the courage she showed in going public with it, caused a great stir and won her international acclaim and admiration. In particular, many hoped that the publicity would increase awareness of genetic medicine and its potential benefits.
From a historical perspective, it is interesting to note the critical role algorithms played in her case. When Jolie had to take such an important decision about her life, she did not climb a mountaintop overlooking the ocean, watch the sun set into the waves and attempt to connect to her innermost feelings. Instead, she preferred to listen to her genes, whose voice manifested not in feelings but in numbers. At the time, Jolie felt no pain or discomfort whatsoever. Her feelings told her: ‘Relax, everything is perfectly fine.’ But the computer algorithms used by her doctors told a different story: ‘You don’t feel anything is wrong, but there is a time bomb ticking in your DNA. Do something about it – now!’
Of course, Jolie’s emotions and unique personality played a key part too. If another woman with a different personality had discovered she was carrying the same genetic mutation, she might well have decided not to undergo a mastectomy. However – and here we enter the twilight zone – what if that other woman had discovered she was carrying not only the dangerous BRCA1 mutation, but another mutation in the (fictional) gene ABCD3, which impairs a brain area responsible for evaluating probabilities, thereby causing people to underestimate dangers? What if a statistician pointed out to this woman that her mother, grandmother and several other relatives all died young because they underestimated various health risks and failed to take precautionary measures?
In all likelihood, you too will make important decisions about your health in the same way as Angelina Jolie. You will undergo a genetic test, a blood test or an fMRI; an algorithm will analyse the results on the basis of enormous statistical databases; and you will then accept the algorithm’s recommendation. This is not an apocalyptic scenario. Algorithms won’t revolt and enslave us. Rather, they will be so good at making decisions for us that it would be madness not to follow their advice.
Angelina Jolie’s first leading role was in the 1993 science-fiction action film Cyborg 2. She played Casella Reese, a cyborg developed in the year 2074 by Pinwheel Robotics for corporate espionage and assassination. Casella is programmed with human emotions, in order to blend better into human societies while pursuing her missions. When Casella discovers that Pinwheel Robotics not only controls her, but also intends to terminate her, she escapes and fights for her life and freedom. Cyborg 2 is a liberal fantasy about an individual fighting for liberty and privacy against global corporate octopuses.
In her real life Jolie preferred to sacrifice privacy and autonomy for health. A similar desire to improve human health may well cause most of us to willingly dismantle the barriers protecting our private spaces and allow state bureaucracies and multinational corporations access to our innermost recesses. For instance, allowing Google to read our emails and follow our activities would make it possible for Google to alert us to brewing epidemics before they are noticed by traditional health services.
How does the UK National Health Service know that a flu epidemic has erupted in London? By analysing the reports of thousands of doctors in hundreds of clinics. And how do all these doctors get the information? Well, when Mary wakes up one morning feeling a bit under the weather, she doesn’t run straight to her doctor. She waits a few hours, or even a day or two, hoping that a nice cup of tea with honey will do the trick. When things don’t improve, she makes an appointment with the doctor, goes to the clinic and describes her symptoms. The doctor types the data into a computer, and hopefully somebody up in NHS headquarters analyses these data, together with reports streaming in from thousands of other doctors, and concludes that flu is on the march. All this takes a lot of time.
Google could do it in minutes. It needs only to monitor the words Londoners type in their emails and in Google’s search engine and cross-reference them with a database of disease symptoms. Suppose on an average day the words ‘headache’, ‘fever’, ‘nausea’ and ‘sneezing’ appear 100,000 times in London emails and searches. If today the Google algorithm notices they appear 300,000 times, then bingo! We have a flu epidemic. There is no need to wait till Mary goes to her doctor. On the very first morning she woke up feeling a bit unwell and before going to work she emailed a colleague, ‘I have a headache, but I’ll be there.’ That’s all Google needs.
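A toy version of this keyword-threshold monitoring might look like the sketch below. The symptom list, baseline figure and alert multiplier are the assumptions from the example above; the code is purely illustrative and is not Google's actual algorithm.

```python
# Illustrative keyword-threshold epidemic monitor -- not Google Flu Trends.
SYMPTOM_WORDS = {"headache", "fever", "nausea", "sneezing"}
BASELINE_DAILY_MENTIONS = 100_000   # mentions on an average day (from the text)
ALERT_MULTIPLIER = 3                # raise the alarm at triple the baseline

def count_symptom_mentions(messages):
    """Count symptom words across today's emails and searches."""
    total = 0
    for text in messages:
        for word in text.lower().split():
            if word.strip(".,!?'") in SYMPTOM_WORDS:
                total += 1
    return total

def flu_alert(messages):
    """True when symptom chatter reaches epidemic levels."""
    return count_symptom_mentions(messages) >= BASELINE_DAILY_MENTIONS * ALERT_MULTIPLIER

# Mary's single email counts for one mention; millions of such messages add up.
print(count_symptom_mentions(["I have a headache, but I'll be there."]))  # -> 1
```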
However, for Google to work its magic Mary must allow Google not only to read her messages, but also to share the information with the health authorities. If Angelina Jolie was willing to sacrifice her privacy in order to raise awareness of breast cancer, why shouldn’t Mary make a similar sacrifice in order to thwart epidemics?
This isn’t a theoretical idea. In 2008 Google actually launched Google Flu Trends, which tracks flu outbreaks by monitoring Google searches. The service is still being developed, and due to privacy limitations it tracks only search words and allegedly avoids reading private emails. But it is already capable of ringing the flu alarm bells ten days before traditional health services.29
The Google Baseline Study is an even more ambitious project. Google intends to build a mammoth database on human health, establishing the ‘perfect health’ profile. Identifying even the smallest deviations from the baseline will hopefully make it possible to alert people to burgeoning health problems such as cancer when they can be nipped in the bud. The Baseline Study dovetails with an entire line of products called Google Fit that will be incorporated into wearables such as clothes, bracelets, shoes and glasses. The idea is for Google Fit products to collect the never-ending stream of biological data to feed the Baseline Study.30
Yet companies such as Google want to go much deeper than wearables. The market for DNA testing is currently growing in leaps and bounds. One of its leaders is 23andMe, a private company founded by Anne Wojcicki, former wife of Google co-founder Sergey Brin. The name ‘23andMe’ refers to the twenty-three pairs of chromosomes that encode the human genome, the message being that my chromosomes have a very special relationship with me. Whoever can understand what the chromosomes are saying can tell you things about yourself that you never even suspected.
If you want to know what, pay 23andMe a mere $99, and they will send you a small package with a tube. You spit into the tube, seal it and mail it to Mountain View, California. There the DNA in your saliva is read, and you receive the results online. You get a list of the potential health hazards you face, and your genetic predisposition to more than ninety traits and conditions ranging from baldness to blindness. ‘Know thyself’ was never easier or cheaper. Since it is all based on statistics, the size of the company’s database is the key to making accurate predictions. Hence the first company to build a giant genetic database will provide customers with the best predictions, and will potentially corner the market. US biotech companies are increasingly worried that strict privacy laws in the USA combined with Chinese disregard for individual privacy may hand China the genetic market on a plate.
If we connect all the dots, and if we give Google and its competitors free access to our biometric devices, to our DNA scans and to our medical records, we will get an all-knowing medical health service that will not only fight epidemics, but will also shield us from cancer, heart attacks and Alzheimer’s. Yet with such a database at its disposal, Google could do far more. Imagine a system that, in the words of the famous Police song, watches every breath you take, every move you make and every bond you break; a system that monitors your bank account and your heartbeat, your sugar levels and your sexual escapades. It will definitely know you much better than you know yourself. The self-deceptions and self-delusions that trap people in bad relationships, wrong careers and harmful habits will not fool Google. Unlike the narrating self that controls us today, Google will not make decisions on the basis of cooked-up stories, and will not be misled by cognitive short cuts and the peak-end rule. Google will actually remember every step we took and every hand we shook.
Many of us would be happy to transfer much of our decision-making processes into the hands of such a system, or at least consult with it whenever we face important choices. Google will advise us which movie to see, where to go on holiday, what to study in college, which job offer to accept, and even whom to date and marry. ‘Listen, Google,’ I will say, ‘both John and Paul are courting me. I like both of them, but in different ways, and it’s so hard to make up my mind. Given everything you know, what do you advise me to do?’
And Google will answer: ‘Well, I’ve known you from the day you were born. I have read all your emails, recorded all your phone calls, and know your favourite films, your DNA and the entire biometric history of your heart. I have exact data about each date you went on, and, if you want, I can show you second-by-second graphs of your heart rate, blood pressure and sugar levels whenever you went on a date with John or Paul. If necessary, I can even provide you with an accurate mathematical ranking of every sexual encounter you had with either of them. And naturally, I know them as well as I know you. Based on all this information, on my superb algorithms, and on decades’ worth of statistics about millions of relationships – I advise you to go with John, with an 87 per cent probability that you will be more satisfied with him in the long run.
‘Indeed, I know you so well that I also know you don’t like this answer. Paul is much more handsome than John, and because you give external appearances too much weight, you secretly wanted me to say “Paul”. Looks matter, of course, but not as much as you think. Your biochemical algorithms – which evolved tens of thousands of years ago on the African savannah – give looks a weight of 35 per cent in their overall rating of potential mates. My algorithms – which are based on the most up-to-date studies and statistics – say that looks have only a 14 per cent impact on the long-term success of romantic relationships. So, even though I took Paul’s looks into account, I still tell you that you would be better off with John.’31
In exchange for such devoted counselling services, we will just have to give up the idea that humans are individuals, and that each human has a free will determining what’s good, what’s beautiful and what is the meaning of life. Humans will no longer be autonomous entities directed by the stories their narrating self invents. Instead, they will be integral parts of a huge global network.
Liberalism sanctifies the narrating self, and allows it to vote in the polling stations, in the supermarket and in the marriage market. For centuries this made good sense, because though the narrating self believed in all kinds of fictions and fantasies, no alternative system knew me better. Yet once we have a system that really does know me better, it will be foolhardy to leave authority in the hands of the narrating self.
Liberal habits such as democratic elections will become obsolete, because Google will be able to represent even my own political opinions better than I can. When I stand behind the curtain in the polling booth, liberalism instructs me to consult my authentic self, and choose whichever party or candidate reflects my deepest desires. Yet the life sciences point out that when I stand there behind that curtain, I don’t really remember everything I felt and thought in the years since the last election. Moreover, I am bombarded by a barrage of propaganda, spin and random memories that might well distort my choices. Just as in Kahneman’s cold-water experiment, in politics too the narrating self follows the peak-end rule. It forgets the vast majority of events, remembers only a few extreme incidents and gives a wholly disproportionate weight to recent happenings.
For four long years I may have repeatedly complained about the PM’s policies, telling myself and anyone willing to listen that he will be ‘the ruin of us all’. However, in the months prior to the elections the government cuts taxes and spends money generously. The ruling party hires the best copywriters to lead a brilliant campaign, with a well-balanced mixture of threats and promises that speak directly to the fear centre in my brain. On the morning of the election I wake up with a cold, which impacts my mental processes and induces me to prefer security and stability over all other considerations. And voila! I send the man who will be ‘the ruin of us all’ back into office for another four years.
I could have saved myself from such a fate if only I had authorised Google to vote for me. Google wasn’t born yesterday, you know. Though it won’t ignore the recent tax cuts and the election promises, it will also remember what happened throughout the previous four years. It will know what my blood pressure was every time I read the morning newspapers, and how my dopamine level plummeted while I watched the evening news. Google will know how to screen the spin-doctors’ empty slogans. Google will understand that illness makes voters lean a bit more to the right than usual, and will compensate for this. Google will therefore be able to vote not according to my momentary state of mind, and not according to the fantasies of the narrating self, but rather according to the real feelings and interests of the collection of biochemical algorithms known as ‘I’.
Naturally, Google will not always get it right. After all, these are all just probabilities. But if Google makes enough good decisions, people will grant it increasing authority. As time goes by, the databases will grow, the statistics will become more accurate, the algorithms will improve and the decisions will be even better. The system will never know me perfectly, and will never be infallible. But there is no need for that. Liberalism will collapse on the day the system knows me better than I know myself. Which is less difficult than it may sound, given that most people don’t really know themselves well.
A recent study commissioned by Google’s nemesis – Facebook – has indicated that already today the Facebook algorithm is a better judge of human personalities and dispositions than even people’s friends, parents and spouses. The study was conducted on 86,220 volunteers who have a Facebook account and who completed a hundred-item personality questionnaire. The Facebook algorithm predicted the volunteers’ answers based on monitoring their Facebook Likes – which webpages, images and clips they tagged with the Like button. The more Likes, the more accurate the predictions. The algorithm’s predictions were compared with those of work colleagues, friends, family members and spouses. Amazingly, the algorithm needed a set of only ten Likes in order to outperform the predictions of work colleagues. It needed seventy Likes to outperform friends, 150 Likes to outperform family members and 300 Likes to outperform spouses. In other words, if you happen to have clicked 300 Likes on your Facebook account, the Facebook algorithm can predict your opinions and desires better than your husband or wife!
Indeed, in some fields the Facebook algorithm did better than the participants themselves. Participants were asked to evaluate things such as their level of substance use or the size of their social networks. Their judgements were less accurate than those of the algorithm. The research concludes with the following prediction (made by the human authors of the article, not by the Facebook algorithm): ‘People might abandon their own psychological judgements and rely on computers when making important life decisions, such as choosing activities, career paths, or even romantic partners. It is possible that such data-driven decisions will improve people’s lives.’32
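For readers who wonder what such a prediction engine looks like in practice, a study of this kind can be approximated with a simple linear model that maps a user's Likes to a questionnaire score. The sketch below uses synthetic data and an off-the-shelf ridge regression; it is an illustration of the general technique, not the researchers' actual code or data.

```python
# Predicting a personality trait from binary Like vectors -- illustrative only.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_users, n_pages = 1_000, 500
likes = rng.integers(0, 2, size=(n_users, n_pages))        # did user Like page?
true_weights = rng.normal(size=n_pages)                     # synthetic ground truth
extraversion = likes @ true_weights + rng.normal(size=n_users)  # questionnaire scores

# Train on 800 users, test on the remaining 200.
model = Ridge(alpha=1.0).fit(likes[:800], extraversion[:800])
predicted = model.predict(likes[800:])
print("correlation with held-out scores:",
      np.corrcoef(predicted, extraversion[800:])[0, 1])
```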
On a more sinister note, the same study implies that in future US presidential elections Facebook could know not only the political opinions of tens of millions of Americans, but also who among them are the critical swing voters, and how these voters might be swung. Facebook could tell that in Oklahoma the race between Republicans and Democrats is particularly close, identify the 32,417 voters who still haven’t made up their minds, and determine what each candidate needs to say in order to tip the balance. How could Facebook obtain this priceless political data? We provide it for free.
In the heyday of European imperialism, conquistadors and merchants bought entire islands and countries in exchange for coloured beads. In the twenty-first century our personal data is probably the most valuable resource most humans still have to offer, and we are giving it to the tech giants in exchange for email services and funny cat videos.
From Oracle to Sovereign
Once Google, Facebook and other algorithms become all-knowing oracles, they may well evolve into agents and ultimately into sovereigns.33 To understand this trajectory, consider the case of Waze – a GPS-based navigational application that many drivers use nowadays. Waze isn’t just a map. Its millions of users constantly update it about traffic jams, car accidents and police cars. Hence Waze knows to divert you away from heavy traffic, and bring you to your destination through the quickest possible route. When you reach a junction, your gut instinct may tell you to turn right, but Waze instructs you to turn left. Sooner or later you learn that you had better listen to Waze rather than to your feelings.34
At first sight it seems that the Waze algorithm serves only as an oracle. You ask a question, the oracle replies, but it is up to you to make a decision. If the oracle wins your trust, however, the next logical step is to turn it into an agent. You give the algorithm only a final aim, and it acts to realise that aim without your supervision. In the case of Waze, this may happen when you connect Waze to a self-driving car, and tell Waze ‘take the fastest route home’ or ‘take the most scenic route’ or ‘take the route which will result in the minimum amount of pollution’. You call the shots, but leave it to Waze to execute your commands.
Finally, Waze might become sovereign. Having so much power in its hands, and knowing far more than you, it may start manipulating you and the other drivers, shaping your desires and making your decisions for you. For example, suppose because Waze is so good, everybody starts using it. And suppose there is a traffic jam on route no. 1, while the alternative route no. 2 is relatively open. If Waze simply lets everybody know that, then all drivers will rush to route no. 2, and it too will be clogged. When everybody uses the same oracle, and everybody believes the oracle, the oracle turns into a sovereign. So Waze must think for us. Maybe it will inform only half the drivers that route no. 2 is open, while keeping this information secret from the other half. Thereby pressure will ease on route no. 1 without blocking route no. 2.
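A toy sketch of this 'sovereign' behaviour might look as follows: instead of giving every driver the same answer, the system splits its recommendations so that neither route clogs up. The congestion figures and the fifty-fifty split are assumptions taken from the example above; this is not Waze's actual routing logic.

```python
# Deliberately splitting recommendations to avoid stampeding every driver
# onto the same road -- purely illustrative.
import random

def recommend_route(congestion):
    """congestion maps route name -> load between 0 (empty) and 1 (jammed)."""
    if congestion["route_1"] > congestion["route_2"]:
        # Divert only a fraction of drivers, in proportion to the spare
        # capacity of the emptier route, so that it does not clog in turn.
        spare_capacity = 1.0 - congestion["route_2"]
        return "route_2" if random.random() < spare_capacity else "route_1"
    return "route_1"

# Route 1 is jammed and route 2 is half empty: roughly half the drivers are
# quietly sent to route 2, while the rest are kept on route 1.
print(recommend_route({"route_1": 0.9, "route_2": 0.5}))
```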
Microsoft is developing a far more sophisticated system called Cortana, named after an AI character in its popular Halo video-game series. Cortana is an AI personal assistant that Microsoft hopes to include as an integral feature of future versions of Windows. Users will be encouraged to allow Cortana access to all their files, emails and applications, so that it will get to know them and can thereby offer advice on myriad matters, as well as becoming a virtual agent representing the user’s interests. Cortana could remind you to buy something for your wife’s birthday, select the present, reserve a table at a restaurant and prompt you to take your medicine an hour before dinner. It could alert you that if you don’t stop reading now, you will be late for an important business meeting. As you are about to enter the meeting, Cortana will warn that your blood pressure is too high and your dopamine level too low, and based on past statistics, you tend to make serious business mistakes in such circumstances. So you had better keep things tentative and avoid committing yourself or signing any deals.
Once Cortanas evolve from oracles to agents, they might start speaking directly with one another on their masters’ behalf. It can begin innocently enough, with my Cortana contacting your Cortana to agree on a place and time for a meeting. Next thing I know, a potential employer will tell me not to bother sending a CV, but simply allow his Cortana to grill my Cortana. Or my Cortana may be approached by the Cortana of a potential lover, and the two will compare notes to decide whether it’s a good match – completely unbeknown to their human owners.
As Cortanas gain authority, they may begin manipulating each other to further the interests of their masters, so that success in the job market or the marriage market may increasingly depend on the quality of your Cortana. Rich people owning the most up-to-date Cortana will have a decisive advantage over poor people with their older versions.
But the murkiest issue of all concerns the identity of Cortana’s master. As we have seen, humans are not individuals, and they don’t have a single unified self. Whose interests, then, should Cortana serve? Suppose my narrating self makes a New Year resolution to start a diet and go to the gym every day. A week later, when it is time for the gym, the experiencing self instructs Cortana to turn on the TV and order pizza. What should Cortana do? Should it obey the experiencing self, or the resolution taken a week earlier by the narrating self?
You may wonder whether Cortana is really different from an alarm clock, which the narrating self sets in the evening in order to wake the experiencing self in time for work. But Cortana will have far more power over me than an alarm clock. The experiencing self can silence the alarm clock by pressing a button. In contrast, Cortana will know me so well that it will know exactly what inner buttons to push in order to make me follow its ‘advice’.
Microsoft’s Cortana is not alone in this game. Google Now and Apple’s Siri are headed in the same direction. Amazon too employs algorithms that constantly study you and then use their accumulated knowledge to recommend products. When I go to a physical bookstore I wander among the shelves and trust my feelings to choose the right book. When I go to visit Amazon’s virtual shop, an algorithm immediately pops up and tells me: ‘I know which books you liked in the past. People with similar tastes also tend to love this or that new book.’
And this is just the beginning. Today in the US more people read digital books than printed ones. Devices such as Amazon’s Kindle are able to collect data on their users while they are reading. Your Kindle can, for example, monitor which parts of a book you read quickly, and which slowly; on which page you took a break, and on which sentence you abandoned the book, never to pick it up again. (Better tell the author to rewrite that bit.) If Kindle is upgraded with face recognition and biometric sensors, it will know how each sentence you read influenced your heart rate and blood pressure. It will know what made you laugh, what made you sad and what made you angry. Soon, books will read you while you are reading them. And whereas you quickly forget most of what you read, Amazon will never forget a thing. Such data will enable Amazon to choose books for you with uncanny precision. It will also enable Amazon to know exactly who you are, and how to turn you on and off.35
Eventually we may reach a point when it will be impossible to disconnect from this all-knowing network even for a moment. Disconnection will mean death. If medical hopes are realised, future humans will incorporate into their bodies a host of biometric devices, bionic organs and nano-robots, which will monitor our health and defend us from infections, illnesses and damage. Yet these devices will have to be online 24/7, both in order to be updated with the latest medical developments, and to protect them from the new plagues of cyberspace. Just as my home computer is constantly attacked by viruses, worms and Trojan horses, so will be my pacemaker, hearing aid and nanotech immune system. If I don’t update my body’s anti-virus program regularly, I will wake up one day to discover that the millions of nano-robots coursing through my veins are now controlled by a North Korean hacker.
The new technologies of the twenty-first century may thus reverse the humanist revolution, stripping humans of their authority, and empowering non-human algorithms instead. If you are horrified by this direction, don’t blame the computer geeks. The responsibility actually lies with the biologists. It is crucial to realise that this entire trend is fuelled more by biological insights than by computer science. It is the life sciences that concluded that organisms are algorithms. If this is not the case – if organisms function in an inherently different way to algorithms – then computers may work wonders in other fields, but they will not be able to understand us and direct our life, and they will certainly be incapable of merging with us. Yet once biologists concluded that organisms are algorithms, they dismantled the wall between the organic and inorganic, turned the computer revolution from a purely mechanical affair into a biological cataclysm, and shifted authority from individual humans to networked algorithms.
Some people are indeed horrified by this development, but the fact is that millions willingly embrace it. Already today many of us give up our privacy and our individuality by conducting much of our lives online, recording our every action and becoming hysterical if connection to the net is interrupted even for a few minutes. The shifting of authority from humans to algorithms is happening all around us, not as a result of some momentous governmental decision, but due to a flood of mundane personal choices.
If we are not careful the result might be an Orwellian police state that constantly monitors and controls not only all our actions, but even what happens inside our bodies and our brains. Just think what uses Stalin could have found for omnipresent biometric sensors – and what uses Putin might yet find for them. However, while defenders of human individuality fear a repetition of twentieth-century nightmares and brace themselves to resist familiar Orwellian foes, human individuality is now facing an even bigger threat from the opposite direction. In the twenty-first century the individual is more likely to disintegrate gently from within than to be brutally crushed from without. Today most corporations and governments pay homage to my individuality, and promise to provide medicine, education and entertainment customised to my unique needs and wishes. But in order to do so, corporations and governments first need to deconstruct me into biochemical subsystems, monitor these subsystems with ubiquitous sensors and decipher their working with powerful algorithms. In the process, the individual will transpire to be nothing but a religious fantasy. Reality will be a mesh of biochemical and electronic algorithms, without clear borders, and without individual hubs.
Upgrading Inequality
So far we have looked at two of the three practical threats to liberalism: firstly, that humans will lose their value completely; secondly, that humans will still be valuable collectively, but will lose their individual authority, and instead be managed by external algorithms. The system will still need you to compose symphonies, teach history or write computer code, but it will know you better than you know yourself, and will therefore make most of the important decisions for you – and you will be perfectly happy with that. It won’t necessarily be a bad world; it will, however, be a post-liberal world.
The third threat to liberalism is that some people will remain both indispensable and undecipherable, but they will constitute a small and privileged elite of upgraded humans. These superhumans will enjoy unheard-of abilities and unprecedented creativity, which will allow them to go on making many of the most important decisions in the world. They will perform crucial services for the system, while the system could neither understand nor manage them. However, most humans will not be upgraded, and will consequently become an inferior caste dominated by both computer algorithms and the new superhumans.
Splitting humankind into biological castes will destroy the foundations of liberal ideology. Liberalism can coexist with socio-economic gaps. Indeed, since it favours liberty over equality, it takes such gaps for granted. However, liberalism still presupposes that all human beings have equal value and authority. From a liberal perspective, it is perfectly all right that one person is a billionaire living in a sumptuous chateau, whereas another is a poor peasant living in a straw hut. For according to liberalism, the peasant’s unique experiences are still just as valuable as the billionaire’s. That’s why liberal authors write long novels about the experiences of poor peasants – and why even billionaires avidly read such books. If you go to see Les Misérables on Broadway or in Covent Garden, you will find that good seats can cost hundreds of dollars, and the audience’s combined wealth probably runs into the billions, yet they still sympathise with Jean Valjean who served nineteen years in jail for stealing a loaf of bread to feed his starving nephews.
The same logic operates on election day, when the vote of the poor peasant counts for exactly the same as the billionaire’s. The liberal solution for social inequality is to give equal value to different human experiences, instead of trying to create the same experiences for everyone. However, will this solution still work once rich and poor are separated not merely by wealth, but also by real biological gaps?
In her New York Times article, Angelina Jolie referred to the high costs of genetic testing. The test Jolie had taken costs $3,000 (not including the price of the actual mastectomy, the reconstructive surgery and related treatments). This in a world where 1 billion people earn less than $1 per day, and another 1.5 billion earn between $1 and $2 a day.36 Even if they work hard their entire life, these people will never be able to afford a $3,000 genetic test. And the economic gaps are at present only increasing. As of early 2016, the sixty-two richest people in the world were worth as much as the poorest 3.6 billion people! Since the world’s population is about 7.2 billion, it means that these sixty-two billionaires together hold as much wealth as the entire bottom half of humankind.37
The cost of DNA testing is likely to go down with time, but expensive new procedures are constantly being pioneered. So while old treatments will gradually come within reach of the masses, the elites will always remain a couple of steps ahead. Throughout history the rich have enjoyed many social and political advantages, but no huge biological gap ever separated them from the poor. Medieval aristocrats claimed that superior blue blood was flowing through their veins, and Hindu Brahmins insisted that they were naturally smarter than everyone else, but this was pure fiction. In the future, however, we may see real gaps in physical and cognitive abilities opening between an upgraded upper class and the rest of society.
When scientists are confronted with this scenario, their standard reply is that in the twentieth century too many medical breakthroughs began with the rich, but eventually benefited the entire population and helped to narrow rather than widen the social gaps. For example, vaccines and antibiotics at first profited mainly the upper classes in Western countries, but today they improve the lives of all humans everywhere.
However, the expectation that this process will be repeated in the twenty-first century may be just wishful thinking, for two important reasons. First, medicine is undergoing a tremendous conceptual revolution. Twentieth-century medicine aimed to heal the sick. Twenty-first-century medicine is increasingly aiming to upgrade the healthy. Healing the sick was an egalitarian project, because it assumed that there is a normative standard of physical and mental health that everyone can and should enjoy. If someone fell below the norm, it was the job of doctors to fix the problem and help him or her ‘be like everyone’. In contrast, upgrading the healthy is an elitist project, because it rejects the idea of a universal standard applicable to all and seeks to give some individuals an edge over others. People want superior memories, above-average intelligence and first-class sexual abilities. If some form of upgrade becomes so cheap and common that everyone enjoys it, it will simply be considered the new baseline, which the next generation of treatments will strive to surpass.
Consequently by 2070 the poor could very well enjoy much better healthcare than today, but the gap separating them from the rich will nevertheless be much greater. People usually compare themselves to their more fortunate contemporaries rather than to their ill-fated ancestors. If you tell a poor American in a Detroit slum that he has access to much better healthcare than his great-grandparents did a century ago, it is unlikely to cheer him up. Indeed, such talk will sound terribly smug and condescending. ‘Why should I compare myself to nineteenth-century factory workers or peasants?’ he would retort. ‘I want to live like the rich people on television, or at least like the folks in the affluent suburbs.’ Similarly, if in 2070 you tell the lower classes that they enjoy better healthcare than in 2017, it might be very cold comfort to them, because they would be comparing themselves to the upgraded superhumans who dominate the world.
Moreover, despite all the medical breakthroughs we cannot be absolutely certain that in 2070 the poor will indeed enjoy better healthcare than today, because the state and the elite may lose interest in providing the poor with healthcare. In the twentieth century medicine benefited the masses because the twentieth century was the age of the masses. Twentieth-century armies needed millions of healthy soldiers, and economies needed millions of healthy workers. Consequently states established public health services to ensure the health and vigour of everyone. Our greatest medical achievements were the provision of mass-hygiene facilities, the campaigns of mass vaccination and the eradication of mass epidemics. In 1914 the Japanese elite had a vested interest in vaccinating the poor and building hospitals and sewage systems in the slums, because if they wanted Japan to be a strong nation with a powerful army and a robust economy, they needed many millions of healthy soldiers and workers.
But the age of the masses may be over, and with it the age of mass medicine. As human soldiers and workers give way to algorithms, at least some elites may conclude that there is no point in providing improved or even standard levels of health for masses of useless poor people, and it is far more sensible to focus on upgrading a handful of superhumans beyond the norm.
Already today the birth rate is falling in technologically advanced countries such as Japan and South Korea, where prodigious efforts are invested in the upbringing and education of fewer and fewer children – from whom more and more is expected. How can huge developing countries like India, Brazil or Nigeria hope to compete with Japan? These countries resemble a long train. The elites in the first-class carriages enjoy health care, education and income levels on a par with the most developed nations in the world. However, the hundreds of millions of ordinary citizens who crowd the third-class carriages still suffer from widespread disease, ignorance and poverty. What would the Indian, Brazilian or Nigerian elites prefer to do in the coming century? Invest in fixing the problems of hundreds of millions of poor, or in upgrading a few million rich? Unlike in the twentieth century, when the elite had a stake in fixing the problems of the poor because they were militarily and economically vital, in the twenty-first century the most efficient (albeit ruthless) strategy might be to let go of the useless third-class carriages, and dash forward with the first class only. In order to compete with Japan, Brazil might need a handful of upgraded superhumans far more than millions of healthy ordinary workers.
How will liberal beliefs survive the appearance of superhumans with exceptional physical, emotional and intellectual abilities? What will happen if it turns out that such superhumans have fundamentally different experiences from normal Sapiens? What if superhumans are bored by novels about the experiences of lowly Sapiens thieves, whereas run-of-the-mill humans find soap operas about superhuman love affairs unintelligible?
The great human projects of the twentieth century – overcoming famine, plague and war – aimed to safeguard a universal norm of abundance, health and peace for everyone without exception. The new projects of the twenty-first century – gaining immortality, bliss and divinity – also hope to serve the whole of humankind. However, because these projects aim at surpassing rather than safeguarding the norm, they may well result in the creation of a new superhuman caste that will abandon its liberal roots and treat normal humans no better than nineteenth-century Europeans treated Africans.
If scientific discoveries and technological developments split humankind into a mass of useless humans and a small elite of upgraded superhumans, or if authority shifts altogether away from human beings into the hands of highly intelligent algorithms, then liberalism will collapse. What new religions or ideologies might fill the resulting vacuum and guide the subsequent evolution of our godlike descendants?
The new religions are unlikely to emerge from the caves of Afghanistan or from the madrasas of the Middle East. Rather, they will emerge from research laboratories. Just as socialism took over the world by promising salvation through steam and electricity, so in the coming decades new techno-religions may conquer the world by promising salvation through algorithms and genes.
Despite all the talk of radical Islam and Christian fundamentalism, the most interesting place in the world from a religious perspective is not the Islamic State or the Bible Belt, but Silicon Valley. That’s where hi-tech gurus are brewing for us brave new religions that have little to do with God, and everything to do with technology. They promise all the old prizes – happiness, peace, prosperity and even eternal life – but here on earth with the help of technology, rather than after death with the help of celestial beings.
These new techno-religions can be divided into two main types: techno-humanism and data religion. Data religion argues that humans have completed their cosmic task and should now pass the torch on to entirely new kinds of entities. We will discuss the dreams and nightmares of data religion in the next chapter. This chapter is dedicated to the more conservative creed of techno-humanism, which still sees humans as the apex of creation and clings to many traditional humanist values. Techno-humanism agrees that Homo sapiens as we know it has run its historical course and will no longer be relevant in the future, but concludes that we should therefore use technology in order to create Homo deus – a much superior human model. Homo deus will retain some essential human features, but will also enjoy upgraded physical and mental abilities that will enable it to hold its own even against the most sophisticated non-conscious algorithms. Since intelligence is decoupling from consciousness, and since non-conscious intelligence is developing at breakneck speed, humans must actively upgrade their minds if they want to stay in the game.
Seventy thousand years ago the Cognitive Revolution transformed the Sapiens mind, thereby turning an insignificant African ape into the ruler of the world. The improved Sapiens minds suddenly had access to the vast intersubjective realm, which enabled them to create gods and corporations, to build cities and empires, to invent writing and money, and eventually to split the atom and reach the moon. As far as we know, this earth-shattering revolution resulted from a few small changes in the Sapiens DNA and a slight rewiring of the Sapiens brain. If so, says techno-humanism, maybe a few additional changes to our genome and another rewiring of our brain will suffice to launch a second cognitive revolution. The mental renovations of the first Cognitive Revolution gave Homo sapiens access to the intersubjective realm and turned them into the rulers of the planet; a second cognitive revolution might give Homo deus access to unimaginable new realms and make them lords of the galaxy.
This idea is an updated variant on the old dreams of evolutionary humanism, which already a century ago called for the creation of superhumans. However, whereas Hitler and his ilk planned to create superhumans by means of selective breeding and ethnic cleansing, twenty-first-century techno-humanism hopes to reach that goal far more peacefully, with the help of genetic engineering, nanotechnology and brain–computer interfaces.
Gap the Mind
Techno-humanism seeks to upgrade the human mind and give us access to unknown experiences and unfamiliar states of consciousness. However, revamping the human mind is an extremely complex and dangerous undertaking. As we discussed in Chapter 3, we don’t really understand the mind. We don’t know how minds emerge, nor what their function is. Through trial and error we are learning how to engineer mental states, but we seldom comprehend the full implications of such manipulations. Worse still, since we are unfamiliar with the full spectrum of mental states, we don’t know what mental aims to set ourselves.
We are akin to the inhabitants of a small isolated island who have just invented the first boat, and are about to set sail without a map or even a destination. Indeed, we are in a somewhat worse condition. The inhabitants of our imaginary island are at least aware that they occupy just a small space within a large and mysterious sea. We on the other hand fail to appreciate that we are living on a tiny island of consciousness within a perhaps limitless ocean of alien mental states.
Just as the spectrums of light and sound are far broader than what we humans can see and hear, so the spectrum of mental states is far larger than what the average human perceives. We can see light in wavelengths of between 400 and 700 nanometres only. Above this small principality of human vision extend the unseen but vast realms of infrared, microwaves and radio waves, and below it lie the dark dominions of ultraviolet, X-rays and gamma rays. Similarly, the spectrum of possible mental states may be infinite, but science has studied only two tiny sections of it: the sub-normative and the WEIRD.
For more than a century psychologists and biologists have conducted extensive research on people suffering from various psychiatric disorders and mental diseases, from autism to schizophrenia. Consequently, today we have a detailed albeit imperfect map of the sub-normative mental spectrum: the zone of human existence characterized by less-than-normal capacities to feel, think or communicate. Simultaneously, scientists have studied the mental states of people considered to be healthy and normative. However, most scientific research about the human mind and the human experience has been conducted on people from Western, educated, industrialised, rich and democratic (WEIRD) societies, who do not constitute a representative sample of humanity. The study of the human mind has so far assumed that Homo sapiens is Homer Simpson.
In a groundbreaking 2010 study, Joseph Henrich, Steven J. Heine and Ara Norenzayan systematically surveyed all the papers published between 2003 and 2007 in leading scientific journals belonging to six different subfields of psychology. They found that though the papers often made broad claims about the human mind, most of them based their findings on exclusively WEIRD samples. For example, in papers published in the Journal of Personality and Social Psychology – arguably the most important journal in the subfield of social psychology – 96 per cent of the sampled individuals were WEIRD, and 68 per cent were Americans. Moreover, 67 per cent of American subjects and 80 per cent of non-American subjects were psychology students! In other words, more than two-thirds of the individuals sampled for papers published in this prestigious journal were psychology students in Western universities. Henrich, Heine and Norenzayan half-jokingly suggested that the journal change its name to the Journal of Personality and Social Psychology of American Psychology Students.1
46. Humans can see only a minuscule part of the electromagnetic spectrum. The spectrum in its entirety is about 10 trillion times larger than that of visible light. Might the mental spectrum be equally vast?
46. ‘EM spectrum’. Licensed under CC BY-SA 3.0 via Commons, https://commons.wikimedia.org/wiki/File:EM_spectrum.svg#/media/File:EM_spectrum.svg.
Psychology students star in many of the studies because their professors oblige them to take part in experiments. If I am a psychology professor at Harvard it is much easier for me to conduct experiments on my own students than on the residents of a crime-ridden Boston slum – not to mention travelling to Namibia and enlisting hunter-gatherers in the Kalahari Desert. However, it may well be that Boston slum-dwellers and Kalahari hunter-gatherers experience mental states that we will never discover by forcing Harvard psychology students to answer long questionnaires or stick their heads into fMRI scanners.
Even if we travelled all over the globe and studied each and every community, we would still cover only a limited part of the Sapiens mental spectrum. Nowadays all humans have been touched by modernity, and are members of a single global village. Though Kalahari foragers are somewhat less modern than Harvard psychology students, they are not a time capsule from our distant past. They too have been influenced by Christian missionaries, European traders, wealthy eco-tourists and inquisitive researchers (the joke is that in the Kalahari Desert, the typical hunter-gatherer band consists of twenty hunters, twenty gatherers and fifty anthropologists).
Before the emergence of the global village the planet was a galaxy of isolated human cultures, which might have fostered mental states that are now extinct. Different socio-economic realities and daily routines nurtured different states of consciousness. Who can gauge the minds of Stone Age mammoth-hunters, Neolithic farmers or Kamakura samurais? Moreover, many premodern cultures believed in the existence of superior states of consciousness that people might access through meditation, drugs or rituals. Shamans, monks and ascetics systematically explored the mysterious lands of mind, and returned laden with breathtaking stories. They told of unfamiliar states of supreme tranquillity, extreme sharpness and matchless sensitivity. They told of the mind expanding to infinity or dissolving into emptiness.
The humanist revolution caused modern Western culture to lose faith and interest in superior mental states, and to sanctify the mundane experiences of the average Joe. Modern Western culture is therefore unique in lacking a specialised class of people who seek to experience extraordinary mental states. It believes anyone attempting to do so is a drug addict, mental patient or charlatan. Consequently, though we have a detailed map of the mental landscape of Harvard psychology students, we know far less about the mental landscapes of Native American shamans, Buddhist monks or Sufi mystics.2
And that is just the Sapiens mind. Fifty thousand years ago we shared this planet with our Neanderthal cousins. They didn’t launch spaceships, build pyramids or establish empires. They obviously had very different mental abilities and lacked many of our talents. Nevertheless, they had bigger brains than us Sapiens. What exactly did they do with all those neurons? We have absolutely no idea. But they might well have had many mental states that no Sapiens has ever experienced.
Yet even if we take into account all human species that ever existed, that would not come close to exhausting the mental spectrum. Other animals probably have experiences that we humans can barely imagine. Bats, for example, experience the world through echolocation. They emit a very rapid stream of high-frequency chirps, well beyond the range of the human ear. They then detect and interpret the returning echoes to build a picture of the world. That picture is so detailed and accurate that bats can fly quickly between trees and buildings, chase and capture moths and mosquitoes, and all the while evade owls and other predators.
Bats live in a world of echoes. Just as in the human world every object has a characteristic shape and colour, so in the bat world every object has its echo-pattern. A bat can distinguish between a tasty moth species and a poisonous moth species by the different echoes bouncing back from their delicate wings. Some edible moth species try to protect themselves by evolving an echo-pattern similar to that of a poisonous species. Others have evolved an even more remarkable ability to deflect the waves of the bat radar, so like stealth bombers they can fly around without the bat knowing they are there. The world of echolocation is as complex and stormy as our familiar world of sound and sight, but we are completely oblivious to it.
One of the most important articles about the philosophy of mind is titled ‘What Is It Like to Be a Bat?’3 In this 1974 paper, the philosopher Thomas Nagel points out that a Sapiens mind cannot fathom the subjective world of a bat. We can write all the algorithms we want about the bat body, bat echolocation systems and bat neurons, but that won’t tell us how it feels to be a bat. How does it feel to echolocate a moth flapping its wings? Is it similar to seeing it, or is it something completely different?
Trying to explain to a Sapiens how it feels to echolocate a butterfly is probably as pointless as explaining to a blind mole how it feels to see a Caravaggio. It’s likely that bat emotions are also deeply influenced by the centrality of their echolocation sense. For Sapiens, love is red, envy is green and depression is blue. Who knows what echolocations colour the love of a female bat for her offspring, or the feelings of a male bat towards his rivals?
Bats aren’t special, of course. They are but one of countless possible examples. Just as Sapiens cannot understand what it’s like to be a bat, we have similar difficulties understanding how it feels to be a whale, a tiger or a pelican. It certainly must feel like something; but we don’t know like what. Both whales and humans process emotions in a part of the brain called the limbic system, yet the whale limbic system includes an entire additional part that is missing from the human structure. Maybe that part enables whales to experience extremely deep and complex emotions that are alien to us? Whales might also have astounding musical experiences that even Bach and Mozart couldn’t grasp. Whales can hear one another from hundreds of miles away, and each whale has a repertoire of characteristic ‘songs’ that may last for hours and follow very intricate patterns. Every now and then a whale composes a new hit, which other whales throughout the ocean adopt. Scientists routinely record these hits and analyse them with the help of computers, but can any human fathom these musical experiences and tell the difference between a whale Beethoven and a whale Justin Bieber?4
47. A spectrogram of a bowhead whale song. How does a whale experience this song? The Voyager record included a whale song in addition to Beethoven, Bach and Chuck Berry. We can only hope it is a good one.
47. © Cornell Bioacoustics Research Program at the Lab of Ornithology.
None of this should surprise us. Sapiens don’t rule the world because they have deeper emotions or more complex musical experiences than other animals. So we may be inferior to whales, bats, tigers and pelicans at least in some emotional and experiential domains.
Beyond the mental spectrum of humans, bats, whales and all other animals, even vaster and stranger continents may lie in wait. In all probability there is an infinite variety of mental states that no Sapiens, bat or dinosaur ever experienced in 4 billion years of terrestrial evolution, because they did not have the necessary faculties. In the future, however, powerful drugs, genetic engineering, electronic helmets and direct brain–computer interfaces may open passages to these places. Just as Columbus and Magellan sailed beyond the horizon to explore new islands and unknown continents, so we may one day embark for the antipodes of the mind.
48. The spectrum of consciousness.
48. Illustration: the spectrum of consciousness.
I Smell Fear
As long as doctors, engineers and customers focused on healing mental diseases and enjoying life in WEIRD societies, the study of sub-normative mental states and WEIRD minds was perhaps sufficient for our needs. Though normative psychology is often accused of mistreating any divergence from the norm, in the last century it has brought relief to countless people, saving the lives and sanity of millions.
However, at the beginning of the third millennium we face a completely different kind of challenge, as liberal humanism makes way for techno-humanism, and medicine is increasingly focused on upgrading the healthy rather than healing the sick. Doctors, engineers and customers no longer want merely to fix mental problems – they are now seeking to upgrade the mind. We are acquiring the technical abilities to begin manufacturing new states of consciousness, yet we lack a map of these potential new territories. Since we are familiar mainly with the normative and sub-normative mental spectrum of WEIRD people, we don’t even know what destinations to aim towards.
Not surprisingly then, positive psychology has become the trendiest subfield of the discipline. In the 1990s leading experts such as Martin Seligman, Ed Diener and Mihaly Csikszentmihalyi argued that psychology should study not just mental illnesses, but also mental strengths. How come we have a remarkably detailed atlas of the sick mind, but no scientific map of the prosperous mind? Over the last two decades, positive psychology has made important first steps in the study of super-normative mental states, but as of 2016, the super-normative zone is largely terra incognita to science.
Under such circumstances, we might rush forward without any map, and focus on upgrading those mental abilities that the current economic and political system needs, while neglecting and even downgrading others. Of course, this is not a completely new phenomenon. For thousands of years the system has been shaping and reshaping our minds according to its needs. Sapiens originally evolved as members of small intimate communities, and their mental faculties were not adapted to living as cogs within a giant machine. However, with the rise of cities, kingdoms and empires, the system cultivated capacities required for large-scale cooperation, while disregarding other skills and aptitudes.
For example, archaic humans probably made extensive use of their sense of smell. Hunter-gatherers are able to smell from a distance the difference between various animal species, various humans and even various emotions. Fear, for example, smells different from courage. When a man is afraid he secretes different chemicals compared to when he is full of courage. If you sat among an archaic band debating whether to start a war against the neighbours, you could literally smell public opinion.
As Sapiens organised themselves into larger groups, noses lost much of their social importance, because they are useful only when dealing with small numbers of individuals. You cannot, for example, smell the American fear of China. Consequently, human olfactory powers were neglected. Brain areas that tens of thousands of years ago probably dealt with odours were put to work on more urgent tasks such as reading, mathematics and abstract reasoning. The system prefers that our neurons solve differential equations rather than smell our neighbours.5
The same thing happened to our other senses and to the underlying ability to pay attention to our sensations. Ancient foragers were always alert and attentive. Wandering in the forest in search of mushrooms, they sniffed the wind carefully and watched the ground intently. When they found a mushroom, they ate it with the utmost attention, aware of every little nuance of flavour, which could distinguish an edible mushroom from its poisonous cousin. Members of today’s affluent societies don’t need such keen awareness. We can go to the supermarket and buy any of a thousand different dishes, all supervised by the health authorities. But whatever we choose – Italian pizza or Thai noodles – we are likely to eat it in haste in front of the TV, hardly paying attention to the taste (which is why food producers are constantly inventing exciting new flavours that might somehow pierce our curtain of indifference). Similarly, thanks to good transport services we can easily meet a friend who lives across town. But even when together we seldom give this friend our undivided attention because we constantly check our smartphone and our Facebook account, convinced that something far more interesting is probably happening elsewhere. Modern humanity is sick with FOMO – Fear Of Missing Out – and though we have more choice than ever before, we have lost the ability to really pay attention to whatever we choose.6
In addition to smelling and paying attention, we have also been losing our ability to dream. Many cultures believed that what people see and do in their dreams is no less important than what they see and do while awake. Hence people actively developed their ability to dream, to remember dreams and even to control their actions in the dream world, which is known as ‘lucid dreaming’. Experts in lucid dreaming could move about the dream world at will, and claimed they could even travel to higher planes of existence or meet visitors from other worlds. The modern world, in contrast, dismisses dreams as subconscious messages at best, and mental garbage at worst. Consequently, dreams play a much smaller part in our lives, few people actively develop their dreaming skills, and many people claim that they don’t dream at all, or that they cannot remember any of their dreams.7
Did the decline in our capacity to smell, pay attention and dream make our lives poorer and greyer? Maybe. But even if it did, for the economic and political system it was worth it. Your boss wants you to constantly check your emails rather than smell flowers or dream about fairies. For similar reasons, it is likely that future upgrades to the human mind will reflect political needs and market forces.
For example, the US army’s ‘attention helmet’ is meant to help people focus on well-defined tasks and speed up their decision-making process. It may, however, reduce their ability to show empathy and tolerate doubts and inner conflicts. Humanist psychologists have pointed out that people in distress often don’t want a quick fix – they want somebody to listen to them and sympathise with their fears and misgivings. Suppose you are having an ongoing crisis in your workplace, because your new boss doesn’t appreciate your views, and insists on doing everything her way. After one particularly unhappy day, you pick up the phone and call a friend. But the friend has little time and energy for you, so he cuts you short, and tries to solve your problem: ‘Okay. I get it. Well, you really have just two options here: either quit the job, or stay and do what the boss wants. And if I were you, I would quit.’ That would hardly help. A really good friend would have patience, and not be so quick to find a solution. He would listen to your distress, and allow time and space for all your contradictory emotions and gnawing anxieties to surface.
The attention helmet works a bit like the impatient friend. Of course sometimes – on the battlefield, for instance – people need to take firm decisions quickly. But there is more to life than that. If we start using the attention helmet in more and more situations, we may end up losing our ability to tolerate confusion, doubts and contradictions, just as we have lost our ability to smell, dream and pay attention. The system may push us in that direction, because it usually rewards us for the decisions we make rather than for our doubts. Yet a life of resolute decisions and quick fixes may be poorer and shallower than one of doubts and contradictions.
When we mix a practical ability to engineer minds with our ignorance of the mental spectrum and with the narrow interests of governments, armies and corporations, we get a recipe for trouble. We may successfully upgrade our bodies and our brains, while losing our minds in the process. Indeed, techno-humanism may end up downgrading humans. The system may prefer downgraded humans not because they would possess any superhuman knacks, but because they would lack some really disturbing human qualities that hamper the system and slow it down. As any farmer knows, it’s usually the brightest goat in the herd that stirs up the most trouble, which is why the Agricultural Revolution involved downgrading animals’ mental abilities. The second cognitive revolution, dreamed up by techno-humanists, might do the same to us, producing human cogs who communicate and process data far more effectively than ever before, but who can barely pay attention, dream or doubt. For millions of years we were enhanced chimpanzees. In the future, we may become oversized ants.
The Nail on Which the Universe Hangs
Techno-humanism faces another dire threat. Like all humanist sects, techno-humanism too sanctifies the human will, seeing it as the nail on which the entire universe hangs. Techno-humanism expects our desires to choose which mental abilities to develop and thereby determine the shape of future minds. Yet what will happen once technological progress makes it possible to reshape and engineer those desires?
Humanism always emphasised that it is not easy to identify our authentic will. When we try to listen to ourselves, we are often flooded by a cacophony of conflicting noises. Indeed, we sometimes don’t really want to hear our authentic voice, because it might disclose unwelcome secrets and make uncomfortable requests. Many people take great care not to probe themselves too deeply. A successful lawyer on the fast track may stifle an inner voice telling her to take a break and have a child. A woman trapped in a dissatisfying marriage fears losing the security it provides. A guilt-ridden soldier is stalked by nightmares about atrocities he committed. A young man unsure of his sexuality follows a personal ‘don’t ask, don’t tell’ policy. Humanism doesn’t think any of these situations has an obvious one-size-fits-all solution. But humanism demands that we show some guts, listen to the inner messages even if they scare us, identify our authentic voice and then follow its instructions regardless of the difficulties.
Technological progress has a very different agenda. It doesn’t want to listen to our inner voices. It wants to control them. Once we understand the biochemical system producing all these voices, we can play with the switches, turn up the volume here, lower it there, and make life much easier and more comfortable. We’ll give Ritalin to the distracted lawyer, Prozac to the guilty soldier and Cipralex to the dissatisfied wife. And that’s just the beginning.
Humanists are often appalled by this approach, but we had better not pass judgement on it too quickly. The humanist recommendation to listen to ourselves has ruined the lives of many a person, whereas the right dosage of the right chemical has greatly improved the well-being and relationships of millions. In order to really listen to themselves, some people must first turn down the volume of the inner screams and diatribes. According to modern psychiatry, many ‘inner voices’ and ‘authentic wishes’ are nothing more than the product of biochemical imbalances and neurological diseases. People suffering from clinical depression repeatedly walk out on promising careers and healthy relationships because some biochemical glitch makes them see everything through dark-coloured lenses. Instead of listening to such destructive inner voices, it might be a good idea to shut them up. When Sally Adee used the attention helmet to silence the voices in her head, she not only became an expert markswoman, but she also felt much better about herself.
Personally, each of us may have a different view about these issues. Yet from a historical perspective it is clear that something momentous is happening. The number one humanist commandment – listen to yourself! – is no longer self-evident. As we learn to turn our inner volume up and down, we give up our belief in authenticity, because it is no longer clear whose hand is on the switch. Silencing annoying noises inside my head seems like a wonderful idea, provided it enables me to finally hear my deep authentic self. But if there is no authentic self, how do I decide which voices to silence and which to amplify?
Let’s assume, just for the sake of argument, that within a few decades brain scientists will grant us easy and accurate control over many inner voices. Imagine a young gay man from a devout Mormon family, who, after years of living in the closet, has finally accumulated enough money to finance a passion operation. He goes to the clinic armed with $100,000, determined to walk out as straight as Joseph Smith. Standing in front of the clinic’s door he mentally repeats what he intends to say to the doctor: ‘Doc, here’s $100,000. Please fix me so that I will never want men again.’ He then rings the bell, and the door is opened by a real-life George Clooney. ‘Doc,’ mumbles the overwhelmed lad, ‘here’s $100,000. Please fix me so that I will never want to be straight again.’
Did the young man’s authentic self win over the religious brainwashing he underwent? Or did a moment’s temptation cause him to betray himself? Or perhaps there is simply no such thing as an authentic self that we can follow or betray? Once we can design and redesign our will, we can no longer see it as the ultimate source of all meaning and authority. For no matter what our will says, we can always make it say something else.
According to humanism, only human desires imbue the world with meaning. Yet if we could choose our desires, on what basis could we possibly make such choices? Suppose Romeo and Juliet opened with Romeo having to decide with whom to fall in love. And suppose even after making a decision, Romeo could always retract and make a different choice instead. What kind of play would it have been? Well, that’s the play technological progress is trying to produce for us. When our desires make us uncomfortable, technology promises to bail us out. When the nail on which the entire universe hangs is pegged in a problematic spot, technology will pull it out and stick it somewhere else. But where exactly? If I could peg that nail anywhere in the cosmos, where should I peg it, and why there of all places?
Humanist dramas unfold when people have uncomfortable desires. For example, it is extremely uncomfortable when Romeo of the house of Montague falls in love with Juliet of the house of Capulet, because the Montagues and Capulets are bitter enemies. The technological solution to such dramas is to ensure we never have uncomfortable desires. How much pain and sorrow would have been avoided if, instead of drinking poison, Romeo and Juliet could simply have taken a pill or worn a helmet that redirected their star-crossed love towards other people?
Techno-humanism faces an impossible dilemma here. It considers the human will to be the most important thing in the universe, hence it pushes humankind to develop technologies that can control and redesign the will. After all, it’s tempting to gain control over the most important thing in the world. Yet should we ever achieve such control, techno-humanism would not know what to do with it, because the sacred human would then become just another designer product. We can never deal with such technologies as long as we believe that the human will and the human experience are the supreme source of authority and meaning.
Hence a bolder techno-religion seeks to sever the humanist umbilical cord altogether. It foresees a world that does not revolve around the desires and experiences of any humanlike beings. What might replace desires and experiences as the source of all meaning and authority? As of 2016, there is one candidate sitting in history’s reception room waiting for the job interview. This candidate is information. The most interesting emerging religion is Dataism, which venerates neither gods nor man – it worships data.
Dataism declares that the universe consists of data flows, and the value of any phenomenon or entity is determined by its contribution to data processing.1 This may strike you as some eccentric fringe notion, but in fact it has already conquered most of the scientific establishment. Dataism was born from the explosive confluence of two scientific tidal waves. In the 150 years since Charles Darwin published On the Origin of Species, the life sciences have come to see organisms as biochemical algorithms. Simultaneously, in the eight decades since Alan Turing formulated the idea of a Turing Machine, computer scientists have learned to engineer increasingly sophisticated electronic algorithms. Dataism puts the two together, pointing out that exactly the same mathematical laws apply to both biochemical and electronic algorithms. Dataism thereby collapses the barrier between animals and machines, and expects electronic algorithms to eventually decipher and outperform biochemical algorithms.
For politicians, business people and ordinary consumers, Dataism offers groundbreaking technologies and immense new powers. For scholars and intellectuals it also promises to provide the scientific holy grail that has eluded us for centuries: a single overarching theory that unifies all the scientific disciplines from musicology through economics to biology. According to Dataism, Beethoven’s Fifth Symphony, a stock-exchange bubble and the flu virus are just three patterns of data flow that can be analysed using the same basic concepts and tools. This idea is extremely attractive. It gives all scientists a common language, builds bridges over academic rifts and easily exports insights across disciplinary borders. Musicologists, economists and cell biologists can finally understand each other.
In the process Dataism inverts the traditional pyramid of learning. Hitherto, data was seen as only the first step in a long chain of intellectual activity. Humans were supposed to distil data into information, information into knowledge, and knowledge into wisdom. However, Dataists believe that humans can no longer cope with the immense flows of data, hence they cannot distil data into information, let alone into knowledge or wisdom. The work of processing data should therefore be entrusted to electronic algorithms, whose capacity far exceeds that of the human brain. In practice, this means that Dataists are sceptical about human knowledge and wisdom, and prefer to put their trust in Big Data and computer algorithms.
Dataism is most firmly entrenched in its two mother disciplines: computer science and biology. Of the two biology is the more important. It was biology’s embrace of Dataism that turned a limited breakthrough in computer science into a world-shattering cataclysm that may completely transform the very nature of life. You may not agree with the idea that organisms are algorithms, and that giraffes, tomatoes and human beings are just different methods for processing data. But you should know that this is current scientific dogma, and it is changing our world beyond recognition.
Not only individual organisms are seen today as data-processing systems, but also entire societies such as beehives, bacteria colonies, forests and human cities. Economists increasingly interpret the economy too as a data-processing system. Laypeople believe that the economy consists of peasants growing wheat, workers manufacturing clothes, and customers buying bread and underpants. Yet experts see the economy as a mechanism for gathering data about desires and abilities, and turning this data into decisions.
According to this view, free-market capitalism and state-controlled communism aren’t competing ideologies, ethical creeds or political institutions. They are, in essence, competing data-processing systems. Capitalism uses distributed processing, whereas communism relies on centralised processing. Capitalism processes data by directly connecting all producers and consumers to one another and allowing them to exchange information freely and make decisions independently. How do you determine the price of bread in a free market? Well, every bakery may produce as much bread as it likes, and charge for it as much as it wants. The customers are equally free to buy as much bread as they can afford, or take their business to a competitor. It isn’t illegal to charge $1,000 for a baguette, but nobody is likely to buy it.
On a much grander scale, if investors predict increased demand for bread, they will buy shares of biotech firms that genetically engineer more prolific wheat strains. The influx of capital will enable the firms to speed up their research, thereby providing more wheat faster, and averting bread shortages. Even if one biotech giant adopts a flawed theory and reaches an impasse, its more successful competitors will likely achieve the hoped-for breakthrough. Free-market capitalism thus distributes the work of analysing data and making decisions between many independent but interconnected processors. As the Austrian economics guru Friedrich Hayek explained, ‘In a system in which the knowledge of the relevant facts is dispersed among many people, prices can act to coordinate the separate actions of different people.’2
According to this view the stock exchange is the fastest and most efficient data-processing system humankind has so far created. Everyone is welcome to join, if not directly then through their banks or pension funds. The stock exchange runs the global economy, and takes into account everything that happens all over the planet – and even beyond it. Prices are influenced by successful scientific experiments, by political scandals in Japan, by volcanic eruptions in Iceland and even by irregular activities on the surface of the sun. In order for the system to run smoothly, as much information as possible needs to flow as freely as possible. When millions of people throughout the world have access to all the relevant information, they determine the most accurate price of oil, of Hyundai shares and of Swedish government bonds by buying and selling them. It has been estimated that the stock exchange needs just fifteen minutes of trade to determine the influence of a New York Times headline on the prices of most shares.3
Data-processing considerations also explain why capitalists favour lower taxes. Heavy taxation means that a large part of all available capital accumulates in one place – the state coffers – and consequently more and more decisions have to be made by a single processor, namely the government. This creates an overly centralised data-processing system. In extreme cases, when taxes are exceedingly high, almost all capital ends up in the government’s hands, and so the government alone calls the shots. It dictates the price of bread, the location of bakeries, and the research-and-development budget. In a free market, if one processor makes a wrong decision, others will be quick to capitalise on its mistake. However, when a single processor makes almost all the decisions, mistakes can be catastrophic.
This extreme situation, in which all data is processed and all decisions are made by a single central processor, is called communism. In a communist economy people allegedly work according to their abilities and receive according to their needs. In other words, the government takes 100 per cent of your profits, decides what you need and then supplies these needs. Though no country ever realised this scheme in its extreme form, the Soviet Union and its satellites came as close as they could. They abandoned the principle of distributed data processing and switched to a model of centralised data processing. All information from throughout the Soviet Union flowed to a single location in Moscow where all the important decisions were made. Producers and consumers could not communicate directly and had to obey government orders.
49. The Soviet leadership in Moscow, 1963: centralised data processing.
49. © ITAR-TASS Photo Agency/Alamy Stock Photo.
For instance, the Soviet economics ministry might decide that the price of bread in all shops should be exactly two roubles and four kopeks, that a particular kolkhoz in the Odessa oblast should switch from growing wheat to raising chickens, and that the Red October bakery in Moscow should produce 3.5 million loaves of bread per day and not a single loaf more. Meanwhile the Soviet science ministry forced all Soviet biotech laboratories to adopt the theories of Trofim Lysenko – the infamous head of the Lenin Academy for Agricultural Sciences. Lysenko rejected the dominant genetic theories of his day. He insisted that if an organism acquired some new trait during its lifetime, this quality could pass directly to its descendants. This idea flew in the face of Darwinian orthodoxy, but it dovetailed nicely with communist educational principles. It implied that if you could train wheat plants to withstand cold weather, their progeny would also be cold-resistant. Lysenko accordingly sent billions of counter-revolutionary wheat plants to be re-educated in Siberia – and the Soviet Union was soon forced to import more and more flour from the United States.4
50. Commotion on the floor of the Chicago Board of Trade: distributed data processing.
50. © Jonathan Kirn/Getty Images.
Capitalism did not defeat communism because capitalism was more ethical, because individual liberties are sacred or because God was angry with the heathen communists. Rather, capitalism won the Cold War because distributed data processing works better than centralised data processing, at least in periods of accelerating technological change. The central committee of the Communist Party just could not deal with the rapidly changing world of the late twentieth century. When all data is accumulated in one secret bunker, and all important decisions are taken by a group of elderly apparatchiks, they can produce nuclear bombs by the cartload, but not an Apple or a Wikipedia.
There is a story (probably apocryphal, like most good stories) that when Mikhail Gorbachev tried to resuscitate the moribund Soviet economy, he sent one of his chief aides to London to find out what Thatcherism was all about, and how a capitalist system actually functioned. The hosts took their Soviet visitor on a tour of the City, of the London stock exchange and of the London School of Economics, where he had lengthy talks with bank managers, entrepreneurs and professors. After many long hours the Soviet expert burst out: ‘Just one moment, please. Forget about all these complicated economic theories. We have been going back and forth across London for a whole day now, and there’s one thing I cannot understand. Back in Moscow our finest minds are working on the bread supply system, and yet there are such long queues in every bakery and grocery store. Here in London live millions of people, and we have passed today in front of many shops and supermarkets, yet I haven’t seen a single bread queue. Please take me to meet the person in charge of supplying bread to London. I must learn his secret.’ The hosts scratched their heads, thought for a moment, and said: ‘Nobody is in charge of supplying bread to London.’
That’s the capitalist secret of success. No central processing unit monopolises all the data on the London bread supply. The information flows freely among millions of consumers and producers, bakers and tycoons, farmers and scientists. Market forces determine the price of bread, the number of loaves baked each day and the research-and-development priorities. If market forces make a bad decision, they soon correct themselves, or so capitalists believe. For our current purposes it doesn’t matter whether this capitalist theory is correct. The crucial thing is that the theory understands economics in terms of data processing.
Where Has All the Power Gone?
Political scientists also increasingly interpret human political structures as data-processing systems. Like capitalism and communism, democracies and dictatorships are in essence competing mechanisms for gathering and analysing information. Dictatorships use centralised processing methods, whereas democracies prefer distributed processing. Over the past decades democracy gained the upper hand because under the unique conditions of the late twentieth century, distributed processing worked better. Under alternative conditions – those prevailing in the ancient Roman Empire, for instance – centralised processing had an edge, which is why the Roman Republic fell and power shifted from the Senate and popular assemblies into the hands of a single autocratic emperor.
This implies that as data-processing conditions change again in the twenty-first century, democracy might decline and even disappear. As both the volume and speed of data increase, venerable institutions like elections, political parties and parliaments might become obsolete – not because they are unethical, but because they can’t process data efficiently enough. These institutions evolved in an era when politics moved faster than technology. In the nineteenth and twentieth centuries the Industrial Revolution unfolded slowly enough for politicians and voters to remain one step ahead of it and regulate and manipulate its course. Yet whereas the rhythm of politics has not changed much since the days of steam, technology has switched from first gear to fourth. Technological revolutions now outpace political processes, causing MPs and voters alike to lose control.
The rise of the Internet gives us a taste of things to come. Cyberspace is now crucial to our daily lives, our economy and our security. Yet the critical choices between alternative web designs weren’t taken through a democratic political process, even though they involved traditional political issues such as sovereignty, borders, privacy and security. Did you ever vote about the shape of cyberspace? Decisions made by web designers far from the public limelight mean that today the Internet is a free and lawless zone that erodes state sovereignty, ignores borders, abolishes privacy and poses perhaps the most formidable global security risk. Whereas a decade ago it hardly registered on the radar, today hysterical officials are predicting an imminent cyber 9/11.
Governments and NGOs consequently conduct intense debates about restructuring the Internet, but it is much harder to change an existing system than to intervene at its inception. Besides, by the time the cumbersome government bureaucracy makes up its mind about cyber regulation, the Internet will have morphed ten times. The governmental tortoise cannot keep up with the technological hare. It is overwhelmed by data. The NSA may be spying on our every word, but to judge by the repeated failures of American foreign policy, nobody in Washington knows what to do with all the data. Never in history did a government know so much about what’s going on in the world – yet few empires have botched things up as clumsily as the contemporary United States. It’s like a poker player who knows what cards his opponents hold, yet somehow still manages to lose round after round.
In the coming decades it is likely that we will see more Internet-like revolutions, in which technology steals a march on politics. Artificial intelligence and biotechnology might soon overhaul our societies and economies – and our bodies and minds too – but they are hardly a blip on the current political radar. Present-day democratic structures just cannot collect and process the relevant data fast enough, and most voters don’t understand biology and cybernetics well enough to form any pertinent opinions. Hence traditional democratic politics is losing control of events, and is failing to present us with meaningful visions of the future.
Ordinary voters are beginning to sense that the democratic mechanism no longer empowers them. The world is changing all around, and they don’t understand how or why. Power is shifting away from them, but they are unsure where it has gone. In Britain voters imagine that power might have shifted to the EU, so they vote for Brexit. In the USA voters imagine that ‘the establishment’ monopolises all the power, so they support anti-establishment candidates such as Bernie Sanders and Donald Trump. The sad truth is that nobody knows where all the power has gone. Power will definitely not shift back to ordinary voters if Britain leaves the EU or if Trump takes over the White House.
That doesn’t mean we will go back to twentieth-century-style dictatorships. Authoritarian regimes seem to be equally overwhelmed by the pace of technological development and the speed and volume of the data flow. In the twentieth century dictators had grand visions for the future. Communists and fascists alike sought to completely destroy the old world and build a new world in its place. Whatever you think about Lenin, Hitler or Mao, you cannot accuse them of lacking vision. Today it seems that leaders have an opportunity to pursue even grander visions. While communists and Nazis tried to create a new society and a new human with the help of steam engines and typewriters, today’s prophets could rely on biotechnology and super-computers.
In science-fiction films ruthless Hitler-like politicians are quick to pounce on such new technologies, putting them in the service of this or that megalomaniacal political ideal. Yet flesh-and-blood politicians in the early twenty-first century, even in authoritarian countries such as Russia, Iran or North Korea, are nothing like their Hollywood counterparts. They don’t seem to be plotting any Brave New World. The wildest dreams of Kim Jong-un and Ali Khamenei don’t extend much beyond atom bombs and ballistic missiles: that is so 1945. Putin’s aspirations seem confined to rebuilding the old Soviet bloc, or the even older tsarist empire. Meanwhile in the USA paranoid Republicans have accused Barack Obama of being a ruthless despot hatching conspiracies to destroy the foundations of American society – yet in eight years of his presidency he barely managed to pass a minor health-care reform. Creating new worlds and new humans was far beyond his agenda.
Precisely because technology is now moving so fast, and parliaments and dictators alike are overwhelmed by data they cannot process quickly enough, present-day politicians are thinking on a far smaller scale than their predecessors a century ago. Consequently, in the early twenty-first century politics is bereft of grand visions. Government has become mere administration. It manages the country, but it no longer leads it. Government ensures that teachers are paid on time and sewage systems don’t overflow, but it has no idea where the country will be in twenty years.
To a certain extent, this is a very good thing. Given that some of the big political visions of the twentieth century led us to Auschwitz, Hiroshima and the Great Leap Forward, maybe we are better off in the hands of petty-minded bureaucrats. Mixing godlike technology with megalomaniacal politics is a recipe for disaster. Many neo-liberal economists and political scientists argue that it is best to leave all the important decisions in the hands of the free market. They thereby give politicians the perfect excuse for inaction and ignorance, which are reinterpreted as profound wisdom. Politicians find it convenient to believe that the reason they don’t understand the world is that they don’t need to understand it.
Yet mixing godlike technology with myopic politics also has its downside. Lack of vision isn’t always a blessing, and not all visions are necessarily bad. In the twentieth century the dystopian Nazi vision did not fall apart spontaneously. It was defeated by the equally grand visions of socialism and liberalism. It is dangerous to trust our future to market forces, because these forces do what’s good for the market rather than what’s good for humankind or for the world. The hand of the market is blind as well as invisible, and left to its own devices it may fail to do anything at all about the threat of global warming or the dangerous potential of artificial intelligence.
Some people believe that there is somebody in charge after all. Not democratic politicians or autocratic despots, but rather a small coterie of billionaires who secretly run the world. But such conspiracy theories never work, because they underestimate the complexity of the system. A few billionaires smoking cigars and drinking Scotch in some back room cannot possibly understand everything happening on the globe, let alone control it. Ruthless billionaires and small interest groups flourish in today’s chaotic world not because they read the map better than anyone else, but because they have very narrow aims. In a chaotic system, tunnel vision has its advantages, and the billionaires’ power extends only as far as their narrow goals. When the world’s richest tycoons want to make another billion dollars, they can easily game the system in order to do so. In contrast, if they felt inclined to reduce global inequality or stop global warming, even they wouldn’t be able to do it, because the system is far too complex.
Yet power vacuums seldom last long. If in the twenty-first century traditional political structures can no longer process the data fast enough to produce meaningful visions, then new and more efficient structures will evolve to take their place. These new structures may be very different from any previous political institutions, whether democratic or authoritarian. The only question is who will build and control these structures. If humankind is no longer up to the task, perhaps it might give somebody else a try.
History in a Nutshell
From a Dataist perspective, we may interpret the entire human species as a single data-processing system, with individual humans serving as its chips. If so, we can also understand the whole of history as a process of improving the efficiency of this system through four basic methods:
1. Increasing the number of processors. A city of 100,000 people has more computing power than a village of 1,000 people.
2. Increasing the variety of processors. Different processors may use diverse ways to calculate and analyse data. Using several kinds of processors in a single system may therefore increase its dynamism and creativity. A conversation between a peasant, a priest and a physician may produce novel ideas that would never emerge from a conversation between three hunter-gatherers.
3. Increasing the number of connections between processors. There is little point in increasing the mere number and variety of processors if they are poorly connected to each other. A trade network linking ten cities is likely to result in many more economic, technological and social innovations than ten isolated cities.
4. Increasing the freedom of movement along existing connections. Connecting processors is hardly useful if data cannot flow freely. Just building roads between ten cities won’t be very useful if they are plagued by robbers, or if some paranoid despot doesn’t allow merchants and travellers to move as they wish.
These four methods often contradict one another. The greater the number and variety of processors, the harder it is to freely connect them. The construction of the Sapiens data-processing system accordingly passed through four main stages, each characterised by an emphasis on a different method.
The first stage began with the Cognitive Revolution, which made it possible to connect vast numbers of Sapiens into a single data-processing network. This gave Sapiens a crucial advantage over all other human and animal species. While there is a strict limit to the number of Neanderthals, chimpanzees or elephants you can connect to the same net, there is no limit to the number of Sapiens.
Sapiens used their advantage in data processing to overrun the entire world. However, as they spread into different lands and climates they lost touch with one another and underwent diverse cultural transformations. The result was an immense variety of human cultures, each with its own lifestyle, behaviour patterns and world view. Hence the first phase of history involved an increase in the number and variety of human processors, at the expense of connectivity: 20,000 years ago there were many more Sapiens than 70,000 years ago, and Sapiens in Europe processed information differently from Sapiens in China. However, there were no connections between people in Europe and China, and it would have seemed utterly impossible that all Sapiens could one day be part of a single data-processing web.
The second stage began with the Agricultural Revolution and continued until the invention of writing and money about 5,000 years ago. Agriculture accelerated demographic growth so the number of human processors rose sharply. Simultaneously, agriculture enabled many more people to live together in close proximity, thereby generating dense local networks that contained unprecedented numbers of processors. In addition, agriculture created new incentives and opportunities for different networks to trade and communicate with one another. Nevertheless, during the second phase centrifugal forces remained predominant. In the absence of writing and money humans could not establish cities, kingdoms or empires. Humankind was still divided into innumerable little tribes, each with its own lifestyle and world view. Uniting the whole of humankind was not even a fantasy.
The third stage kicked off with the invention of writing and money about 5,000 years ago, and lasted until the beginning of the Scientific Revolution. Thanks to writing and money the gravitational field of human cooperation finally overpowered the centrifugal forces. Human groups bonded and merged to form cities and kingdoms. Political and commercial links between different cities and kingdoms also tightened. At least since the first millennium BC – when coinage, empires and universal religions appeared – humans began to consciously dream about forging a single network that would encompass the entire globe.
This dream became a reality during the fourth and last stage of history, which began around 1492. Early modern explorers, conquerors and traders wove the first thin threads that encompassed the whole world. In the late modern period these threads were made stronger and denser, so that the spider’s web of Columbus’s days became the steel and asphalt grid of the twenty-first century. Even more importantly, information was allowed to flow increasingly freely throughout this global grid. When Columbus first hooked up the Eurasian net to the American net, only a few bits of data managed to cross the ocean each year, running a gauntlet of cultural prejudices, strict censorship and political repression. But as the years went by the free market, the scientific community, the rule of law and the spread of democracy all helped to dissolve the barriers. We often imagine that democracy and the free market won because they were ‘good’. In truth, they won because they improved the global data-processing system.
So, over the last 70,000 years humankind first spread out, then separated into distinct groups, and finally merged again. Yet the process of unification did not take us back to the beginning. When the diverse human groups fused into the global village of today, each brought along its unique legacy of thoughts, tools and behaviours that it had collected and developed along the way. Our modern larders are now stuffed with Middle Eastern wheat, Andean potatoes, New Guinean sugar and Ethiopian coffee. Similarly, our language, religion, music and politics are replete with heirlooms from across the planet.5
If humankind is indeed a single data-processing system, what is its output? Dataists would say that its output will be the creation of a new and even more efficient data-processing system, called the Internet-of-All-Things. Once this mission is accomplished, Homo sapiens will vanish.
Information Wants to Be Free
Like capitalism, Dataism too began as a neutral scientific theory, but is now mutating into a religion that claims to determine right and wrong. The supreme value of this new religion is ‘information flow’. If life is the movement of information, and if we think that life is good, it follows that we should deepen and broaden the flow of information in the universe. According to Dataism, human experiences are not sacred and Homo sapiens isn’t the apex of creation or a precursor of some future Homo deus. Humans are merely tools for creating the Internet-of-All-Things, which may eventually spread out from planet Earth to pervade the whole galaxy and even the whole universe. This cosmic data-processing system will be like God. It will be everywhere and will control everything, and humans are destined to merge into it.
This conception is reminiscent of some traditional religious visions. Thus Hindus believe that humans can and should merge into the universal soul of the cosmos – the atman. Christians believe that after death saints are infused with the infinite grace of God, whereas sinners cut themselves off from His presence. Indeed, in Silicon Valley the Dataist prophets consciously use traditional messianic language. For example, Ray Kurzweil’s book of prophecies is called The Singularity is Near, echoing John the Baptist’s cry: ‘the kingdom of heaven is near’ (Matthew 3:2).
Dataists explain to those who still worship flesh-and-blood mortals that they are overly attached to outdated technology. Homo sapiens is an obsolete algorithm. After all, what’s the advantage of humans over chickens? Only that in humans information flows in much more complex patterns. Humans absorb more data, and process it using better algorithms than do chickens. (In day-to-day language this means that humans allegedly have deeper emotions and superior intellectual abilities. But remember that according to current biological dogma, emotions and intelligence are just algorithms.) Well then, if we could create a data-processing system that can assimilate even more data than a human being, and process it even more efficiently, wouldn’t that system be superior to a human in exactly the same way that a human is superior to a chicken?
Dataism isn’t limited to idle prophecies. Like every religion, it has its practical commandments. First and foremost a Dataist ought to maximise data flow by connecting to more and more media, and producing and consuming more and more information. Like other successful religions, Dataism is also missionary. Its second commandment is to link everything to the system, including heretics who don’t want to be plugged in. And ‘everything’ means more than just humans. It means every thing. Our bodies, of course, but also cars in the street, refrigerators in kitchens, chickens in their coops and trees in the jungle – all should be connected to the Internet-of-All-Things. The refrigerator will monitor the number of eggs in the drawer, and inform the chicken coop when a new shipment is needed. Cars will talk with one another, and the trees in the jungle will report on the weather and on carbon dioxide levels. We mustn’t leave any part of the universe disconnected from the great web of life. Conversely, the greatest sin would be to block the data flow. What is death, if not a condition in which information doesn’t flow? Hence Dataism upholds the freedom of information as the greatest good of all.
People rarely manage to come up with a completely new value. The last time this happened was in the eighteenth century, when the humanist revolution began preaching the stirring ideals of human liberty, human equality and human fraternity. Since 1789, despite numerous wars, revolutions and upheavals, humans have not managed to conceive of any new value. All subsequent conflicts and struggles have been conducted either in the name of the three humanist values, or in the name of even older ones such as obeying God or serving the nation. Dataism is the first movement since 1789 that created a genuinely novel value: freedom of information.
We mustn’t confuse freedom of information with the old liberal value of freedom of expression. Freedom of expression was given to humans, and protected their right to think and say what they wished – including their right to keep their mouths shut and their thoughts to themselves. Freedom of information, in contrast, is not given to humans. It is given to information. Moreover, this novel value may impinge on humans’ traditional freedom of expression, by privileging the right of information to circulate freely over the right of humans to own data and to restrict its movement.
On 11 January 2013, Dataism got its first martyr when Aaron Swartz, a twenty-six-year-old American hacker, committed suicide in his apartment. Swartz was a rare genius. At fourteen, he helped develop the crucial RSS protocol. Swartz was also a firm believer in the freedom of information. In 2008 he published the ‘Guerilla Open Access Manifesto’, which demanded a free and unlimited flow of information. Swartz said that ‘We need to take information, wherever it is stored, make our copies and share them with the world. We need to take stuff that’s out of copyright and add it to the archive. We need to buy secret databases and put them on the Web. We need to download scientific journals and upload them to file sharing networks. We need to fight for Guerilla Open Access.’
Swartz was as good as his word. He became annoyed with the JSTOR digital library for charging its customers. JSTOR holds millions of scientific papers and studies, and believes in the freedom of expression of scientists and journal editors, which includes the freedom to charge a fee for reading their articles. According to JSTOR, if I want to get paid for the ideas I created, it’s my right to do so. Swartz thought otherwise. He believed that information wants to be free, that ideas don’t belong to the people who created them, and that it is wrong to lock data behind walls and charge an entrance fee. He used the MIT computer network to access JSTOR, and downloaded hundreds of thousands of scientific papers, which he intended to release onto the Internet, so that everybody could read them freely.
Swartz was arrested and put on trial. When he realised that he would probably be convicted and sent to jail, he hanged himself. Hackers reacted with petitions and attacks directed at the academic and government institutions that persecuted Swartz and that infringe on the freedom of information. Under pressure, JSTOR apologised for its part in the tragedy and today allows free access to much, though not all, of its data.6
To convince sceptics Dataist missionaries repeatedly explain the immense benefits of the freedom of information. Just as capitalists believe that all good things depend on economic growth, so Dataists believe all good things – including economic growth – depend on the freedom of information. Why did the USA grow faster than the USSR? Because information flowed more freely in the USA. Why are Americans healthier, wealthier and happier than Iranians or Nigerians? Thanks to the freedom of information. So if we want to create a better world, the key is to set the data free.
We have already seen that Google can detect new epidemics faster than traditional health organisations, but only if we allow it free access to the information we are producing. Free-flowing data can similarly reduce pollution and waste, for example by rationalising the transportation system. In 2010 the number of private cars in the world exceeded 1 billion, and has since kept growing.7 These cars pollute the planet and waste enormous resources, not least by necessitating ever wider roads and more parking spaces. People have become so used to the convenience of private transport that they are unlikely to settle for buses and trains. However, Dataists point out that what people really want is mobility rather than a private car, and a good data-processing system can provide this mobility far more cheaply and efficiently.
I have a private car, but most of the time it sits idly in the parking lot. On a typical day, I enter my car at 8:04, and drive for half an hour to the university, where I park my car for the day. At 18:11 I come back to the car, drive half an hour back home, and that’s it. So I am using my car for just an hour a day. Why do I need to keep it for the other twenty-three hours? Why not create a smart car-pool system, run by computer algorithms? The computer would know that I need to leave home at 8:04 and would route the nearest autonomous car to pick me up at that precise moment. After dropping me off on campus it would be available for other purposes instead of waiting in the parking lot. At 18:11 sharp, as I leave the university gate, another communal car would stop right next to me, and take me home. In this way 50 million communal autonomous cars could replace 1 billion private cars, and we would also need far fewer roads, bridges, tunnels and parking spaces. Provided, of course, that I renounce my privacy and allow the algorithms always to know where I am and where I want to go.
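For readers who want to check the arithmetic behind this thought experiment, here is a minimal back-of-envelope sketch in Python. The figures are the illustrative ones used above – a billion private cars, each driven for roughly an hour a day – plus one assumption of my own: that a communal autonomous car can be kept productively busy for about twenty hours out of every twenty-four.

# Back-of-envelope fleet sizing for the car-pool thought experiment.
# Figures from the text: 1 billion private cars, each driven ~1 hour a day.
# Added assumption: a communal autonomous car stays busy ~20 hours a day
# (the remaining hours cover charging, cleaning and repositioning).
PRIVATE_CARS = 1_000_000_000
HOURS_PER_PRIVATE_CAR_PER_DAY = 1
PRODUCTIVE_HOURS_PER_SHARED_CAR = 20

daily_demand_hours = PRIVATE_CARS * HOURS_PER_PRIVATE_CAR_PER_DAY
shared_fleet_needed = daily_demand_hours / PRODUCTIVE_HOURS_PER_SHARED_CAR

print(f"Daily driving demand: {daily_demand_hours:,} car-hours")
print(f"Communal cars needed: {shared_fleet_needed:,.0f}")
# Prints 50,000,000 – the 50 million communal cars mentioned above.

Of course, real demand is not spread evenly across the day; rush-hour peaks would push the required fleet well above this lower bound.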
Record, Upload, Share!
But maybe you don’t need convincing, especially if you are under the age of twenty. People just want to be part of the data flow, even if that means giving up their privacy, their autonomy and their individuality. Humanist art sanctifies the individual genius, so a Picasso doodle on a napkin nets millions at Sotheby’s. Humanist science glorifies the individual researcher, and every scholar dreams of putting his or her name at the top of a Science or Nature paper. But a growing number of artistic and scientific creations are nowadays produced by the ceaseless collaboration of ‘everyone’. Who writes Wikipedia? All of us.
The individual is becoming a tiny chip inside a giant system that nobody really understands. Every day I absorb countless data bits through emails, phone calls and articles; process the data; and transmit back new bits through more emails, phone calls and articles. I don’t really know where I fit into the greater scheme of things, or how my bits of data connect with the bits produced by billions of other humans and computers. I don’t have time to find out, because I am too busy answering all the emails. And as I process more data more efficiently – answering more emails, making more phone calls and writing more articles – so I flood the people around me with even more data.
This relentless flow of data sparks new inventions and disruptions that nobody plans, controls or comprehends. No one understands how the global economy functions or where global politics is heading. But no one needs to understand. All you need to do is answer your emails faster – and allow the system to read them. Just as free-market capitalists believe in the invisible hand of the market, so Dataists believe in the invisible hand of the data flow.
As the global data-processing system becomes all-knowing and all-powerful, so connecting to the system becomes the source of all meaning. Humans want to merge into the data flow because when you are part of the data flow you are part of something much bigger than yourself. Traditional religions assured you that your every word and action was part of some great cosmic plan, and that God watched you every minute and cared about all your thoughts and feelings. Data religion now says that your every word and action is part of the great data flow, that the algorithms are constantly watching you and that they care about everything you do and feel. Most people like this very much. For true believers, being disconnected from the data flow risks losing the very meaning of life. What’s the point of doing or experiencing anything if nobody knows about it, and if it doesn’t contribute something to the global exchange of information?
Humanism holds that experiences occur inside us, and that we ought to find within ourselves the meaning of all that happens, thereby infusing the universe with meaning. Dataists believe that experiences are valueless if they are not shared, and that we need not – indeed cannot – find meaning within ourselves. We need only record and connect our experiences to the great data flow, and the algorithms will discover their meaning and tell us what to do. Twenty years ago Japanese tourists were a universal laughing stock because they always carried cameras and took pictures of everything in sight. Now everyone is doing it. If you go to India and see an elephant, you don’t look at the elephant and ask yourself, ‘What do I feel?’ – you are too busy looking for your smartphone, taking a picture of the elephant, posting it on Facebook and then checking your account every two minutes to see how many Likes you got. Writing a private diary – a common humanist practice in previous generations – sounds to many present-day youngsters utterly pointless. Why write anything if nobody else can read it? The new motto says: ‘If you experience something – record it. If you record something – upload it. If you upload something – share it.’
Throughout this book we have repeatedly asked what makes humans superior to other animals. Dataism has a new and simple answer. In themselves human experiences are not superior at all to the experiences of wolves or elephants. One bit of data is as good as another. However, humans can write poems and blogs about their experiences and post them online, thereby enriching the global data-processing system. That makes their bits count. Wolves cannot do this. Hence all the experiences of wolves – as deep and complex as they may be – are worthless. No wonder we are so busy converting our experiences into data. It isn’t a question of trendiness. It is a question of survival. We must prove to ourselves and to the system that we still have value. And value lies not in having experiences, but in turning these experiences into free-flowing data.
(By the way, wolves – or at least their dog cousins – aren’t a hopeless case. A company called ‘No More Woof’ is developing a helmet for reading canine experiences. The helmet monitors the dog’s brain waves, and uses computer algorithms to translate simple sentiments such as ‘I am angry’ into human language.8 Your dog may soon have a Facebook or Twitter account of his own – perhaps with more Likes and followers than you.)
Know Thyself
Dataism is neither liberal nor humanist. It should be emphasised, however, that Dataism isn’t anti-humanist. It has nothing against human experiences. It just doesn’t think they are intrinsically valuable. When we surveyed the three main humanist sects, we asked which experience is the most valuable: listening to Beethoven’s Fifth Symphony, to Chuck Berry, to a pygmy initiation song or to the howl of a wolf in heat. A Dataist would argue that the entire exercise is misguided, because music should be evaluated according to the data it carries rather than according to the experience it creates. A Dataist might explain, for example, that the Fifth Symphony carries far more data than the pygmy initiation song, because it uses more chords and scales and creates dialogues with many more musical styles. Consequently, you need far more computational power to decipher the Fifth Symphony, and you gain far more knowledge from doing so.
Music, according to this view, is mathematical patterns. Mathematics can describe every musical piece, as well as the relations between any two pieces. Hence you can measure the precise data value of every symphony, song and howl, and determine which is the richest. The experiences they create in humans or wolves don’t really matter. True, for the last 70,000 years or so, human experiences have been the most efficient data-processing algorithms in the universe, hence there was good reason to sanctify them. However, we may soon reach a point when these algorithms will be superseded, and even become a burden.
Sapiens evolved in the African savannah tens of thousands of years ago, and their algorithms are just not built to handle twenty-first-century data flows. We might try to upgrade the human data-processing system, but this may not be enough. The Internet-of-All-Things may soon create such huge and rapid data flows that even upgraded human algorithms would not be able to handle them. When cars replaced horse-drawn carriages, we didn’t upgrade the horses – we retired them. Perhaps it is time to do the same with Homo sapiens.
Dataism adopts a strictly functional approach to humanity, appraising the value of human experiences according to their function in data-processing mechanisms. If we develop an algorithm that fulfils the same function better, human experiences will lose their value. Thus if we can replace not just taxi drivers and doctors but also lawyers, poets and musicians with superior computer programs, why should we care if these programs have no consciousness and no subjective experiences? If some humanist starts adulating the sacredness of human experience, Dataists would dismiss such sentimental humbug. ‘The experience you are praising is just an outdated biochemical algorithm. In the African savannah 70,000 years ago, that algorithm was state-of-the-art. Even in the twentieth century it was vital for the army and for the economy. But soon we will have much better algorithms.’
In the climactic scene of many Hollywood science-fiction movies, humans face an alien invasion fleet, an army of rebellious robots or an all-knowing super-computer that intends to obliterate them. Humanity seems doomed. But at the very last moment, against all odds, humanity triumphs thanks to something that the aliens, the robots and the super-computers didn’t suspect and cannot fathom: love. The hero, who up till now has been easily manipulated by the super-computer and riddled with bullets by the evil robots, is inspired by his sweetheart to make a completely unexpected move that turns the tables on the thunderstruck Matrix. Dataism finds such scenarios utterly ridiculous. ‘Come on,’ it admonishes the Hollywood screenwriters, ‘is that all you could come up with? Love? And not even some platonic cosmic love, but the carnal attraction between two mammals? Do you really think that an all-knowing super-computer or aliens who contrived to conquer the entire galaxy would be dumbfounded by a hormonal rush?’
By equating the human experience with data patterns, Dataism undermines our primary source of authority and meaning and heralds a tremendous religious revolution, the like of which has not been seen since the eighteenth century. In the days of Locke, Hume and Voltaire humanists argued that ‘God is a product of the human imagination’. Dataism now gives humanists a taste of their own medicine, and tells them: ‘Yes, God is a product of the human imagination, but human imagination in turn is just the product of biochemical algorithms.’ In the eighteenth century, humanism sidelined God by shifting from a deo-centric to a homo-centric world view. In the twenty-first century, Dataism may sideline humans by shifting from a homo-centric to a data-centric view.
The Dataist revolution will probably take a few decades, if not a century or two. But then the humanist revolution too did not happen overnight. At first humans kept on believing in God, arguing that humans are sacred because they were created by God for some divine purpose. Only much later did some people dare say that humans are sacred in their own right, and that God doesn’t exist at all. Similarly, today most Dataists claim that the Internet-of-All-Things is sacred because humans are creating it to serve human needs. But eventually the Internet-of-All-Things may become sacred in its own right.
The shift from a homo-centric to a data-centric world view won’t be merely a philosophical revolution. It will be a practical revolution. All truly important revolutions are practical. The humanist idea that ‘humans invented God’ was significant because it had far-reaching practical implications. Similarly, the Dataist idea that ‘organisms are algorithms’ is significant due to its day-to-day practical consequences. Ideas change the world only when they change our behaviour.
In ancient Babylon, when people faced a difficult dilemma they climbed in the darkness of night to the top of the local temple and observed the sky. The Babylonians believed that the stars controlled their fate and predicted their future. By watching the stars the Babylonians decided whether to get married, plough the fields and go to war. Their philosophical beliefs were translated into very practical procedures.
Scriptural religions such as Judaism and Christianity told a different story: ‘The stars are lying. God, who created the stars, revealed the entire truth in the Bible. So stop observing the stars – read the Bible instead!’ This too was a practical recommendation. When people didn’t know whom to marry, what career to choose or whether to start a war, they read the Bible and followed its counsel.
Next came the humanists with an altogether new story: ‘Humans invented God, wrote the Bible and then interpreted it in a thousand different ways. So humans themselves are the source of all truth. You may read the Bible as an inspiring human creation, but you don’t really need to. If you are facing any dilemma, just listen to yourself and follow your inner voice.’ Humanism then gave detailed practical instructions on how to listen to yourself, recommending techniques such as watching sunsets, reading Goethe, keeping a private diary, having heart-to-heart talks with a good friend and holding democratic elections.
For centuries scientists too accepted these humanist guidelines. When physicists wondered whether or not to get married, they too watched sunsets and tried to get in touch with themselves. When chemists contemplated whether to accept a problematic job offer, they too wrote diaries and had heart-to-heart talks with a good friend. When biologists debated whether to wage war or sign a peace treaty, they too voted in democratic elections. When brain scientists wrote books about their startling discoveries, they often put an inspiring Goethe quote on the first page. This was the basis for the modern alliance between science and humanism, which kept the delicate balance between the modern yang and the modern yin – between reason and emotion, between the laboratory and the museum, between the production line and the supermarket.
The scientists not only sanctified human feelings, but also found an excellent evolutionary reason to do so. After Darwin, biologists began explaining that feelings are complex algorithms honed by evolution to help animals make correct decisions. Our love, our fear and our passion aren’t some nebulous spiritual phenomena good only for composing poetry. Rather, they encapsulate millions of years of practical wisdom. When you read the Bible you are getting advice from a few priests and rabbis who lived in ancient Jerusalem. In contrast, when you listen to your feelings, you follow an algorithm that evolution has developed for millions of years, and that withstood the harshest quality-control tests of natural selection. Your feelings are the voice of millions of ancestors, each of whom managed to survive and reproduce in an unforgiving environment. Your feelings are not infallible, of course, but they are better than most other sources of guidance. For millions upon millions of years, feelings were the best algorithms in the world. Hence in the days of Confucius, of Muhammad or of Stalin, people should have listened to their feelings rather than to the teachings of Confucianism, Islam or communism.
Yet in the twenty-first century, feelings are no longer the best algorithms in the world. We are developing superior algorithms that utilise unprecedented computing power and giant databases. The Google and Facebook algorithms not only know exactly how you feel, they also know myriad other things about you that you hardly suspect. Consequently you should stop listening to your feelings and start listening to these external algorithms instead. What’s the point of having democratic elections when the algorithms know not only how each person is going to vote, but also the underlying neurological reasons why one person votes Democrat while another votes Republican? Whereas humanism commanded: ‘Listen to your feelings!’ Dataism now commands: ‘Listen to the algorithms! They know how you feel.’
When you contemplate whom to marry, which career to pursue and whether to start a war, Dataism tells you that it would be a complete waste of time to climb a high mountain and watch the sun setting into the waves. It would be equally futile to visit a museum, write a private diary or have a heart-to-heart talk with a friend. Yes, in order to make the right decisions you must get to know yourself better. But if you want to know yourself in the twenty-first century, there are much better methods than climbing mountains, going to museums or writing diaries. Here are some practical Dataist guidelines for you:
‘You want to know who you really are?’ asks Dataism. ‘Then forget about mountains and museums. Have you had your DNA sequenced? No?! What are you waiting for? Go and do it today. And convince your grandparents, parents and siblings to have their DNA sequenced too – their data is very valuable for you. And have you heard about these wearable biometric devices that measure your blood pressure and heart rate twenty-four hours a day? Good – so buy one of those, put it on and connect it to your smartphone. And while you are shopping, buy a mobile camera and microphone, record everything you do, and put it online. And allow Google and Facebook to read all your emails, monitor all your chats and messages, and keep a record of all your Likes and clicks. If you do all that, then the great algorithms of the Internet-of-All-Things will tell you whom to marry, which career to pursue and whether to start a war.’
But where do these great algorithms come from? This is the mystery of Dataism. Just as according to Christianity we humans cannot understand God and His plan, so Dataism declares that the human brain cannot fathom the new master algorithms. At present, of course, the algorithms are mostly written by human hackers. Yet the really important algorithms – such as the Google search algorithm – are developed by huge teams. Each member understands just one part of the puzzle, and nobody really understands the algorithm as a whole. Moreover, with the rise of machine learning and artificial neural networks, more and more algorithms evolve independently, improving themselves and learning from their own mistakes. They analyse astronomical amounts of data that no human can possibly encompass, and learn to recognise patterns and adopt strategies that escape the human mind. The seed algorithm may initially be developed by humans, but as it grows it follows its own path, going where no human has gone before – and where no human can follow.
A Ripple in the Data Flow
Dataism naturally has its critics and heretics. As we saw in Chapter 3, it’s doubtful whether life can really be reduced to data flows. In particular, at present we have no idea how or why data flows could produce consciousness and subjective experiences. Maybe we’ll have a good explanation in twenty years. But maybe we’ll discover that organisms aren’t algorithms after all.
It is equally doubtful whether life boils down to mere decision-making. Under Dataist influence both the life sciences and the social sciences have become obsessed with decision-making processes, as if that’s all there is to life. But is it so? Sensations, emotions and thoughts certainly play an important part in making decisions, but is that their sole meaning? Dataism is gaining a better and better understanding of decision-making processes, but it might be adopting an increasingly skewed view of life.
A critical examination of Dataist dogma is likely to be not only the greatest scientific challenge of the twenty-first century, but also the most urgent political and economic project. Scholars in the life sciences and social sciences should ask themselves whether we miss anything when we understand life as data processing and decision-making. Is there perhaps something in the universe that cannot be reduced to data? Suppose non-conscious algorithms could eventually outperform conscious intelligence in all known data-processing tasks – what, if anything, would be lost by replacing conscious intelligence with superior non-conscious algorithms?
Of course, even if Dataism is wrong and organisms aren’t just algorithms, it won’t necessarily prevent Dataism from taking over the world. Many previous religions gained enormous popularity and power despite their factual inaccuracies. If Christianity and communism could do it, why not Dataism? Dataism has especially good prospects, because it is currently spreading across all scientific disciplines. A unified scientific paradigm may easily become an unassailable dogma. It is very difficult to contest a scientific paradigm, but up till now, no single paradigm has been adopted by the entire scientific establishment. Hence scholars in one field could always import heretical views from outside. But if everyone from musicologists to biologists uses the same Dataist paradigm, interdisciplinary excursions will serve only to strengthen the paradigm further. Consequently even if the paradigm is flawed, it would be extremely difficult to resist.
If Dataism succeeds in conquering the world, what will happen to us humans? Initially, Dataism will probably accelerate the humanist pursuit of health, happiness and power. Dataism spreads itself by promising to fulfil these humanist aspirations. In order to achieve immortality, bliss and divine powers of creation, we need to process immense amounts of data, far beyond the capacity of the human brain. So the algorithms will do it for us. Yet once authority shifts from humans to algorithms, the humanist projects may become irrelevant. Once we abandon the homo-centric world view in favour of a data-centric world view, human health and happiness may seem far less important. Why bother so much about obsolete data-processing machines when far superior models are already in existence? We are striving to engineer the Internet-of-All-Things in the hope that it will make us healthy, happy and powerful. Yet once the Internet-of-All-Things is up and running, humans might be reduced from engineers to chips, then to data, and eventually we might dissolve within the torrent of data like a clump of earth within a gushing river.
Dataism thereby threatens to do to Homo sapiens what Homo sapiens has done to all other animals. Over the course of history humans created a global network and evaluated everything according to its function within that network. For thousands of years this inflated human pride and prejudices. Since humans fulfilled the most important functions in the network, it was easy for us to take credit for the network’s achievements, and to see ourselves as the apex of creation. The lives and experiences of all other animals were undervalued because they fulfilled far less important functions, and whenever an animal ceased to fulfil any function at all, it went extinct. However, once we humans lose our functional importance to the network, we will discover that we are not the apex of creation after all. The yardsticks that we ourselves have enshrined will condemn us to join the mammoths and Chinese river dolphins in oblivion. Looking back, humanity will turn out to have been just a ripple within the cosmic data flow.
We cannot really predict the future, because technology is not deterministic. The same technology could create very different kinds of societies. For example, the technology of the Industrial Revolution – trains, electricity, radio, telephone – could be used to establish communist dictatorships, fascist regimes or liberal democracies. Consider South Korea and North Korea: they have had access to exactly the same technology, but they have chosen to employ it in very different ways.
The rise of AI and biotechnology will certainly transform the world, but it does not mandate a single deterministic outcome. All the scenarios outlined in this book should be understood as possibilities rather than prophecies. If you don’t like some of these possibilities, you are welcome to think and behave in new ways that will prevent these particular possibilities from materialising.
However, it is not easy to think and behave in new ways, because our thoughts and actions are usually constrained by present-day ideologies and social systems. This book traces the origins of our present-day conditioning in order to loosen its grip and enable us to think in far more imaginative ways about our future. Instead of narrowing our horizons by forecasting a single definitive scenario, the book aims to broaden our horizons and make us aware of a much wider spectrum of options. As I have repeatedly emphasised, nobody really knows what the job market, the family or the ecology will look like in 2050, or which religions, economic systems and political structures will dominate the world.
Yet broadening our horizons can backfire by making us more confused and inactive than before. With so many scenarios and possibilities, what should we pay attention to? The world is changing faster than ever before, and we are flooded by impossible amounts of data, of ideas, of promises and of threats. Humans are relinquishing authority to the free market, to crowd wisdom and to external algorithms partly because we cannot deal with the deluge of data. In the past, censorship worked by blocking the flow of information. In the twenty-first century censorship works by flooding people with irrelevant information. We just don’t know what to pay attention to, and often spend our time investigating and debating side issues. In ancient times having power meant having access to data. Today having power means knowing what to ignore. So considering everything that is happening in our chaotic world, what should we focus on?
If we think in terms of months, we would probably focus on immediate problems such as the turmoil in the Middle East, the refugee crisis in Europe and the slowing of the Chinese economy. If we think in terms of decades, then global warming, growing inequality and the disruption of the job market loom large. Yet if we take the really grand view of life, all other problems and developments are overshadowed by three interlinked processes:
1. Science is converging on an all-encompassing dogma, which says that organisms are algorithms and life is data processing.
2. Intelligence is decoupling from consciousness.
3. Non-conscious but highly intelligent algorithms may soon know us better than we know ourselves.
These three processes raise three key questions, which I hope will stick in your mind long after you have finished this book:
1. Are organisms really just algorithms, and is life really just data processing?
2. What’s more valuable – intelligence or consciousness?
3. What will happen to society, politics and daily life when non-conscious but highly intelligent algorithms know us better than we know ourselves?
1. Tim Blanning, The Pursuit of Glory (New York: Penguin Books, 2008), 52.
2. Ibid., 53. See also: J. Neumann and S. Lindgrén, ‘Great Historical Events That Were Significantly Affected by the Weather: 4, The Great Famines in Finland and Estonia, 1695–97’, Bulletin of the American Meteorological Society 60 (1979), 775–87; Andrew B. Appleby, ‘Epidemics and Famine in the Little Ice Age’, Journal of Interdisciplinary History 10:4 (1980), 643–63; Cormac Ó Gráda and Jean-Michel Chevet, ‘Famine and Market in Ancien Régime France’, Journal of Economic History 62:3 (2002), 706–73.
3. Nicole Darmon et al., ‘L’insécurité alimentaire pour raisons financières en France’, Observatoire National de la Pauvreté et de l’Exclusion Sociale, https://www.onpes.gouv.fr/IMG/pdf/Darmon.pdf, accessed 3 March 2015; Rapport Annuel 2013, Banques Alimentaires, http://en.calameo.com/read/001358178ec47d2018425, accessed 4 March 2015.
4. Richard Dobbs et al., ‘How the World Could Better Fight Obesity’, McKinsey & Company, November 2014, accessed 11 December 2014, http://www.mckinsey.com/insights/economic_studies/how_the_world_could_better_fight_obesity.
5. ‘Global Burden of Disease, Injuries and Risk Factors Study 2013’, Lancet, 18 December 2014, accessed 18 December 2014, http://www.thelancet.com/themed/global-burden-of-disease; Stephen Adams, ‘Obesity Killing Three Times As Many As Malnutrition’, Telegraph, 13 December 2012, accessed 18 December 2014, http://www.telegraph.co.uk/health/healthnews/9742960/Obesity-killing-three-times-as-many-asmalnutrition.html.
6. Robert S. Lopez, The Birth of Europe [in Hebrew] (Tel Aviv: Dvir, 1990), 427.
7. Alfred W. Crosby, The Columbian Exchange: Biological and Cultural Consequences of 1492 (Westport: Greenwood Press, 1972); William H. McNeill, Plagues and Peoples (Oxford: Basil Blackwell, 1977).
8. Hugh Thomas, Conquest: Cortes, Montezuma and the Fall of Old Mexico (New York: Simon & Schuster, 1993), 443–6; Rodolfo Acuna-Soto et al., ‘Megadrought and Megadeath in 16th Century Mexico’, Historical Review 8:4 (2002), 360–2; Sherburne F. Cook and Lesley Byrd Simpson, The Population of Central Mexico in the Sixteenth Century (Berkeley: University of California Press, 1948).
9. Jared Diamond, Guns, Germs and Steel: The Fates of Human Societies [in Hebrew] (Tel Aviv: Am Oved, 2002), 167.
10. Jeffery K. Taubenberger and David M. Morens, ‘1918 Influenza: The Mother of All Pandemics’, Emerging Infectious Diseases 12:1 (2006), 15–22; Niall P. A. S. Johnson and Juergen Mueller, ‘Updating the Accounts: Global Mortality of the 1918–1920 “Spanish” Influenza Pandemic’, Bulletin of the History of Medicine 76:1 (2002), 105–15; Stacey L. Knobler, Alison Mack, Adel Mahmoud et al., (eds), The Threat of Pandemic Influenza: Are We Ready? Workshop Summary (Washington DC: National Academies Press, 2005), 57–110; David van Reybrouck, Congo: The Epic History of a People (New York: HarperCollins, 2014), 164; Siddharth Chandra, Goran Kuljanin and Jennifer Wray, ‘Mortality from the Influenza Pandemic of 1918–1919: The Case of India’, Demography 49:3 (2012), 857–65; George C. Kohn, Encyclopedia of Plague and Pestilence: From Ancient Times to the Present, 3rd edn (New York: Facts on File, 2008), 363.
11. The averages between 2005 and 2010 were 4.6 per cent globally, 7.9 per cent in Africa and 0.7 per cent in Europe and North America. See: ‘Infant Mortality Rate (Both Sexes Combined) by Major Area, Region and Country, 1950–2010 (Infant Deaths for 1000 Live Births), Estimates’, World Population Prospects: the 2010 Revision, UN Department of Economic and Social Affairs, April 2011, accessed 26 May 2012, http://esa.un.org/unpd/wpp/Excel-Data/mortality.htm. See also Alain Bideau, Bertrand Desjardins and Hector Perez-Brignoli (eds), Infant and Child Mortality in the Past (Oxford: Clarendon Press, 1997); Edward Anthony Wrigley et al., English Population History from Family Reconstitution, 1580–1837 (Cambridge: Cambridge University Press, 1997), 295–6, 303.
12. David A. Koplow, Smallpox: The Fight to Eradicate a Global Scourge (Berkeley: University of California Press, 2004); Abdel R. Omran, ‘The Epidemiological Transition: A Theory of Population Change’, Milbank Memorial Fund Quarterly 83:4 (2005), 731–57; Thomas McKeown, The Modern Rise of Populations (New York: Academic Press, 1976); Simon Szreter, Health and Wealth: Studies in History and Policy (Rochester: University of Rochester Press, 2005); Roderick Floud, Robert W. Fogel, Bernard Harris and Sok Chul Hong, The Changing Body: Health, Nutrition and Human Development in the Western World Since 1700 (New York: Cambridge University Press, 2011); James C. Riley, Rising Life Expectancy: A Global History (New York: Cambridge University Press, 2001).
13. ‘Cholera’, World Health Organization, February 2014, accessed 18 December 2014, http://www.who.int/mediacentre/factsheets/fs107/en/index.html.
14. ‘Experimental Therapies: Growing Interest in the Use of Whole Blood or Plasma from Recovered Ebola Patients’, World Health Organization, 26 September 2014, accessed 23 April 2015, http://www.who.int/mediacentre/news/ebola/26-september-2014/en/.
15. Hung Y. Fan, Ross F. Conner and Luis P. Villarreal, AIDS: Science and Society, 6th edn (Sudbury: Jones and Bartlett Publishers, 2011).
16. Peter Piot and Thomas C. Quinn, ‘Response to the AIDS Pandemic – A Global Health Model’, New England Journal of Medicine 368:23 (2013), 2210–18.
17. ‘Old age’ is never listed as a cause of death in official statistics. Instead, when a frail old woman eventually succumbs to this or that infection, the particular infection will be listed as the cause of death. Hence, officially, infectious diseases still account for more than 20 per cent of deaths. But this is a fundamentally different situation than in past centuries, when large numbers of children and fit adults died from infectious diseases.
18. David M. Livermore, ‘Bacterial Resistance: Origins, Epidemiology, and Impact’, Clinical Infectious Diseases 36:s1 (2005), s11–23; Richards G. Wax et al. (eds), Bacterial Resistance to Antimicrobials, 2nd edn (Boca Raton: CRC Press, 2008); Maja Babic and Robert A. Bonomo, ‘Mutations as a Basis of Antimicrobial Resistance’, in Antimicrobial Drug Resistance: Mechanisms of Drug Resistance, ed. Douglas Mayers, vol. 1 (New York: Humana Press, 2009), 65–74; Julian Davies and Dorothy Davies, ‘Origins and Evolution of Antibiotic Resistance’, Microbiology and Molecular Biology Reviews 74:3 (2010), 417–33; Richard J. Fair and Yitzhak Tor, ‘Antibiotics and Bacterial Resistance in the 21st Century’, Perspectives in Medicinal Chemistry 6 (2014), 25–64.
19. Alfonso J. Alanis, ‘Resistance to Antibiotics: Are We in the Post-Antibiotic Era?’, Archives of Medical Research 36:6 (2005), 697–705; Stephan Harbarth and Matthew H. Samore, ‘Antimicrobial Resistance Determinants and Future Control’, Emerging Infectious Diseases 11:6 (2005), 794–801; Hiroshi Yoneyama and Ryoichi Katsumata, ‘Antibiotic Resistance in Bacteria and Its Future for Novel Antibiotic Development’, Bioscience, Biotechnology and Biochemistry 70:5 (2006), 1060–75; Cesar A. Arias and Barbara E. Murray, ‘Antibiotic-Resistant Bugs in the 21st Century – A Clinical Super-Challenge’, New England Journal of Medicine 360 (2009), 439–43; Brad Spellberg, John G. Bartlett and David N. Gilbert, ‘The Future of Antibiotics and Resistance’, New England Journal of Medicine 368 (2013), 299–302.
20. Losee L. Ling et al., ‘A New Antibiotic Kills Pathogens without Detectable Resistance’, Nature 517 (2015), 455–9; Gerard Wright, ‘Antibiotics: An Irresistible Newcomer’, Nature 517 (2015), 442–4.
21. Roey Tzezana, The Guide to the Future [in Hebrew] (Haifa: Roey Tzezana, 2013), 209–33.
22. Azar Gat, War in Human Civilization (Oxford: Oxford University Press, 2006), 130–1; Steven Pinker, The Better Angels of Our Nature: Why Violence Has Declined (New York: Viking, 2011); Joshua S. Goldstein, Winning the War on War: The Decline of Armed Conflict Worldwide (New York: Dutton, 2011); Robert S. Walker and Drew H. Bailey, ‘Body Counts in Lowland South American Violence’, Evolution and Human Behavior 34:1 (2013), 29–34; I. J. N. Thorpe, ‘Anthropology, Archaeology, and the Origin of Warfare’, World Archaeology 35:1 (2003), 145–65; Raymond C. Kelly, Warless Societies and the Origin of War (Ann Arbor: University of Michigan Press, 2000); Lawrence H. Keeley, War Before Civilization: The Myth of the Peaceful Savage (Oxford: Oxford University Press, 1996); Slavomil Vencl, ‘Stone Age Warfare’, in Ancient Warfare: Archaeological Perspectives, ed. John Carman and Anthony Harding (Stroud: Sutton Publishing, 1999), 57–73.
23. ‘Global Health Observatory Data Repository, 2012’, World Health Organization, accessed 16 August 2015, http://apps.who.int/gho/data/node.main.RCODWORLD?lang=en; ‘Global Study on Homicide, 2013’, UNODC, accessed 16 August 2015, http://www.unodc.org/documents/gsh/pdfs/2014_GLOBAL_HOMICIDE_BOOK_web.pdf; http://www.who.int/healthinfo/global_burden_disease/estimates/en/index1.html.
24. Van Reybrouck, Congo, 456–7.
25. Deaths from obesity: ‘Global Burden of Disease, Injuries and Risk Factors Study 2013’, Lancet, 18 December 2014, accessed 18 December 2014, http://www.thelancet.com/themed/global-burden-of-disease; Stephen Adams, ‘Obesity Killing Three Times as Many as Malnutrition’, Telegraph, 13 December 2012, accessed 18 December 2014, http://www.telegraph.co.uk/health/healthnews/9742960/Obesity-killing-three-times-as-manyas-malnutrition.html. Deaths from terrorism: Global Terrorism Database, http://www.start.umd.edu/gtd/, accessed 16 January 2016.
26. Arion McNicoll, ‘How Google’s Calico Aims to Fight Aging and “Solve Death”’, CNN, 3 October 2013, accessed 19 December 2014, http://edition.cnn.com/2013/10/03/tech/innovation/google-calicoaging-death/.
27. Katrina Brooker, ‘Google Ventures and the Search for Immortality’, Bloomberg, 9 March 2015, accessed 15 April 2015, http://www.bloomberg.com/news/articles/2015-03-09/googleventures-bill-maris-investing-in-idea-of-living-to-500.
28. Mick Brown, ‘Peter Thiel: The Billionaire Tech Entrepreneur on a Mission to Cheat Death’, Telegraph, 19 September 2014, accessed 19 December 2014, http://www.telegraph.co.uk/technology/11098971/Peter-Thiel-the-billionaire-tech-entrepreneur-on-a-mission-to-cheatdeath.html.
29. Kim Hill et al., ‘Mortality Rates Among Wild Chimpanzees’, Journal of Human Evolution 40:5 (2001), 437–50; James G. Herndon, ‘Brain Weight Throughout the Life Span of the Chimpanzee’, Journal of Comparative Neurology 409 (1999), 567–72.
30. Beatrice Scheubel, Bismarck’s Institutions: A Historical Perspective on the Social Security Hypothesis (Tubingen: Mohr Siebeck, 2013); E. P. Hannock, The Origin of the Welfare State in England and Germany, 1850–1914 (Cambridge: Cambridge University Press, 2007).
31. ‘Mental Health: Age Standardized Suicide Rates (per 100,000 population), 2012’, World Health Organization, accessed 28 December 2014, http://gamapserver.who.int/gho/interactive_charts/mental_health/suicide_rates/atlas.html.
32. Ian Morris, Why the West Rules – For Now (Toronto: McClelland & Stewart, 2010), 626–9.
33. David G. Myers, ‘The Funds, Friends, and Faith of Happy People’, American Psychologist 55:1 (2000), 61; Ronald Inglehart et al., ‘Development, Freedom, and Rising Happiness: A Global Perspective (1981–2007)’, Perspectives on Psychological Science 3:4 (2008), 264–85. See also Mihaly Csikszentmihalyi, ‘If We Are So Rich, Why Aren’t We Happy?’, American Psychologist 54:10 (1999), 821–7; Gregg Easterbrook, The Progress Paradox: How Life Gets Better While People Feel Worse (New York: Random House, 2003).
34. Kenji Suzuki, ‘Are They Frigid to the Economic Development? Reconsideration of the Economic Effect on Subjective Well-being in Japan’, Social Indicators Research 92:1 (2009), 81–9; Richard A. Easterlin, ‘Will Raising the Incomes of all Increase the Happiness of All?’, Journal of Economic Behavior and Organization 27:1 (1995), 35–47; Richard A. Easterlin, ‘Diminishing Marginal Utility of Income? Caveat Emptor’, Social Indicators Research 70:3 (2005), 243–55.
35. Linda C. Raeder, John Stuart Mill and the Religion of Humanity (Columbia: University of Missouri Press, 2002).
36. Oliver Turnbull and Mark Solms, The Brain and the Inner World [in Hebrew] (Tel Aviv: Hakibbutz Hameuchad, 2005), 92–6; Kent C. Berridge and Morten L. Kringelbach, ‘Affective Neuroscience of Pleasure: Reward in Humans and Animals’, Psychopharmacology 199 (2008), 457–80; Morten L. Kringelbach, The Pleasure Center: Trust Your Animal Instincts (Oxford: Oxford University Press, 2009).
37. M. Csikszentmihalyi, Finding Flow: The Psychology of Engagement with Everyday Life (New York: Basic Books, 1997).
38. Centers for Disease Control and Prevention, Attention-Deficit/Hyperactivity Disorder (ADHD), http://www.cdc.gov/ncbddd/adhd/data.html, accessed 4 January 2016; Sarah Harris, ‘Number of Children Given Drugs for ADHD Up Ninefold with Patients As Young As Three Being Prescribed Ritalin’, Daily Mail, 28 June 2013, http://www.dailymail.co.uk/health/article-2351427/Number-children-given-drugs-ADHD-ninefoldpatients-young-THREE-prescribed-Ritalin.html, accessed 4 January 2016; International Narcotics Control Board (UN), Psychotropic Substances, Statistics for 2013, Assessments of Annual Medical and Scientific Requirements 2014, 39–40.
39. There is insufficient evidence regarding the abuse of such stimulants by schoolchildren, but a 2013 study has found that between 5 and 15 per cent of US college students illegally used some kind of stimulant at least once: C. Ian Ragan, Imre Bard and Ilina Singh, ‘What Should We Do about Student Use of Cognitive Enhancers? An Analysis of Current Evidence’, Neuropharmacology 64 (2013), 589.
40. Bradley J. Partridge, ‘Smart Drugs “As Common as Coffee”: Media Hype about Neuroenhancement’, PLoS One 6:11 (2011), e28416.
41. Office of the Chief of Public Affairs Press Release, ‘Army, Health Promotion Risk Reduction Suicide Prevention Report, 2010’, accessed 23 December 2014, http://csf2.army.mil/downloads/HP-RR-SPReport2010.pdf; Mark Thompson, ‘America’s Medicated Army’, Time, 5 June 2008, accessed 19 December 2014, http://content.time.com/time/magazine/article/0,9171,1812055,00.html; Office of the Surgeon Multi-National Force–Iraq and Office of the Command Surgeon, ‘Mental Health Advisory Team (MHAT) V Operation Iraqi Freedom 06–08: Iraq Operation Enduring Freedom 8: Afghanistan’, 14 February 2008, accessed 23 December 2014, http://www.careforthetroops.org/reports/Report-MHATV-4-FEB-2008-Overview.pdf.
42. Tina L. Dorsey, ‘Drugs and Crime Facts’, US Department of Justice, accessed 20 February 2015, http://www.bjs.gov/content/pub/pdf/dcf.pdf; H. C. West, W. J. Sabol and S. J. Greenman, ‘Prisoners in 2009’, US Department of Justice, Bureau of Justice Statistics Bulletin (December 2010), 1–38; ‘Drugs and Crime Facts: Drug Use and Crime’, US Department of Justice, accessed 19 December 2014, http://www.bjs.gov/content/dcf/duc.cfm; ‘Offender Management Statistics Bulletin, July to September 2014’, UK Ministry of Justice, 29 January 2015, accessed 20 February 2015, https://www.gov.uk/government/statistics/offender-management-statistics-quarterlyjuly-to-september-2014.; Mirian Lights et al., ‘Gender Differences in Substance Misuse and Mental Health amongst Prisoners’, UK Ministry of Justice, 2013, accessed 20 February 2015, https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/220060/gender-substance-misuse-mental-health-prisoners.pdf; Jason Payne and Antonette Gaffney, ‘How Much Crime is Drug or Alcohol Related? Self-Reported Attributions of Police Detainees’, Trends and Issues in Crime and Criminal Justice 439 (2012), http://www.aic.gov.au/media_library/publications/tandi_pdf/tandi439.pdf, accessed 11 March 2015; Philippe Robert, ‘The French Criminal Justice System’, in Punishment in Europe: A Critical Anatomy of Penal Systems, ed. Vincenzo Ruggiero and Mick Ryan (Houndmills: Palgrave Macmillan, 2013), 116.
43. Betsy Isaacson, ‘Mind Control: How EEG Devices Will Read Your Brain Waves and Change Your World’, Huffington Post, 20 November 2014, accessed 20 December 2014, http://www.huffingtonpost.com/2012/11/20/mind-control-how-eeg-devices-read-brainwaves_n_2001431.html; ‘EPOC Headset’, Emotiv, http://emotiv.com/store/epoc-detail/; ‘Biosensor Innovation to Power Breakthrough Wearable Technologies Today and Tomorrow’, NeuroSky, http://neurosky.com/.
44. Samantha Payne, ‘Stockholm: Members of Epicenter Workspace Are Using Microchip Implants to Open Doors’, International Business Times, 31 January 2015, accessed 9 August 2015, http://www.ibtimes.co.uk/stockholm-office-workers-epicenter-implanted-microchips-pay-theirlunch-1486045.
45. Meika Loe, The Rise of Viagra: How the Little Blue Pill Changed Sex in America (New York: New York University Press, 2004).
46. Brian Morgan, ‘Saints and Sinners: Sir Harold Gillies’, Bulletin of the Royal College of Surgeons of England 95:6 (2013), 204–5; Donald W. Buck II, ‘A Link to Gillies: One Surgeon’s Quest to Uncover His Surgical Roots’, Annals of Plastic Surgery 68:1 (2012), 1–4.
47. Paolo Santoni-Rugio, A History of Plastic Surgery (Berlin, Heidelberg: Springer, 2007); P. Niclas Broer, Steven M. Levine and Sabrina Juran, ‘Plastic Surgery: Quo Vadis? Current Trends and Future Projections of Aesthetic Plastic Surgical Procedures in the United States’, Plastic and Reconstructive Surgery 133:3 (2014), 293e–302e.
48. Holly Firfer, ‘How Far Will Couples Go to Conceive?’, CNN, 17 June 2004, accessed 3 May 2015, http://edition.cnn.com/2004/HEALTH/03/12/infertility.treatment/index.html?iref=allsearch.
49. Rowena Mason and Hannah Devlin, ‘MPs Vote in Favour of “Three-Person Embryo” Law’, Guardian, 3 February 2015, accessed 3 May 2015, http://www.theguardian.com/science/2015/feb/03/mps-vote-favour-three-person-embryo-law.
50. Lionel S. Smith and Mark D. E. Fellowes, ‘Towards a Lawn without Grass: The Journey of the Imperfect Lawn and Its Analogues’, Studies in the History of Gardens & Designed Landscape 33:3 (2013), 158–9; John Dixon Hunt and Peter Willis (eds), The Genius of the Place: The English Landscape Garden 1620–1820, 5th edn (Cambridge, MA: MIT Press, 2000), 1–45; Anne Helmriech, The English Garden and National Identity: The Competing Styles of Garden Design 1870–1914 (Cambridge: Cambridge University Press, 2002), 1–6.
51. Robert J. Lake, ‘Social Class, Etiquette and Behavioral Restraint in British Lawn Tennis’, International Journal of the History of Sport 28:6 (2011), 876–94; Beatriz Colomina, ‘The Lawn at War: 1941–1961’, in The American Lawn, ed. Georges Teyssot (New York: Princeton Architectural Press, 1999), 135–53; Virginia Scott Jenkins, The Lawn: History of an American Obsession (Washington: Smithsonian Institution, 1994).
2 The Anthropocene
1. ‘Canis lupus’, IUCN Red List of Threatened Species, accessed 20 December 2014, http://www.iucnredlist.org/details/3746/1; ‘Fact Sheet: Gray Wolf’, Defenders of Wildlife, accessed 20 December 2014, http://www.defenders.org/gray-wolf/basic-facts; ‘Companion Animals’, IFAH, accessed 20 December 2014, http://www.ifaheurope.org/companion-animals/about-pets.html; ‘Global Review 2013’, World Animal Protection, accessed 20 December 2014, https://www.worldanimalprotection.us.org/sites/default/files/us_files/global_review_2013_0.pdf.
2. Anthony D. Barnosky, ‘Megafauna Biomass Tradeoff as a Driver of Quaternary and Future Extinctions’, PNAS 105:1 (2008), 11543–8; for wolves and lions: William J. Ripple et al., ‘Status and Ecological Effects of the World’s Largest Carnivores’, Science 343:6167 (2014), 151; according to Dr Stanley Coren there are about 500 million dogs in the world: Stanley Coren, ‘How Many Dogs Are There in the World?’, Psychology Today, 19 September 2012, accessed 20 December 2014, http://www.psychologytoday.com/blog/canine-corner/201209/how-manydogs-are-there-in-the-world; for the number of cats, see: Nicholas Wade, ‘DNA Traces 5 Matriarchs of 600 Million Domestic Cats’, New York Times, 29 June 2007, accessed 20 December 2014, http://www.nytimes.com/2007/06/29/health/29iht-cats.1.6406020.html; for the African buffalo, see: ‘Syncerus caffer’, IUCN Red List of Threatened Species, accessed 20 December 2014, http://www.iucnredlist.org/details/21251/0; for cattle population, see: David Cottle and Lewis Kahn (eds), Beef Cattle Production and Trade (Collingwood: Csiro, 2014), 66; for the number of chickens, see: ‘Live Animals’, Food and Agriculture Organization of the United Nations: Statistical Division, accessed 20 December 2014, http://faostat3.fao.org/browse/Q/QA/E; for the number of chimpanzees, see: ‘Pan troglodytes’, IUCN Red List of Threatened Species, accessed 20 December 2014, http://www.iucnredlist.org/details/15933/0.
3. ‘Living Planet Report 2014’, WWF Global, accessed 20 December 2014, http://wwf.panda.org/about_our_earth/all_publications/living_planet_report/.
4. Richard Inger et al., ‘Common European Birds Are Declining Rapidly While Less Abundant Species’ Numbers Are Rising’, Ecology Letters 18:1 (2014), 28–36; ‘Live Animals’, Food and Agriculture Organization of the United Nations, accessed 20 December 2014, http://faostat.fao.org/site/573/default.aspx#ancor.
5. Simon L. Lewis and Mark A. Maslin, ‘Defining the Anthropocene’, Nature 519 (2015), 171–80.
6. Timothy F. Flannery, The Future Eaters: An Ecological History of the Australasian Lands and Peoples (Port Melbourne: Reed Books Australia, 1994); Anthony D. Barnosky et al., ‘Assessing the Causes of Late Pleistocene Extinctions on the Continents’, Science 306:5693 (2004), 70–5; Barry W. Brook and David M. J. S. Bowman, ‘The Uncertain Blitzkrieg of Pleistocene Megafauna’, Journal of Biogeography 31:4 (2004), 517–23; Gifford H. Miller et al., ‘Ecosystem Collapse in Pleistocene Australia and a Human Role in Megafaunal Extinction’, Science 309:5732 (2005), 287–90; Richard G. Roberts et al., ‘New Ages for the Last Australian Megafauna: Continent Wide Extinction about 46,000 Years Ago’, Science 292:5523 (2001), 1888–92; Stephen Wroe and Judith Field, ‘A Review of the Evidence for a Human Role in the Extinction of Australian Megafauna and an Alternative Explanation’, Quaternary Science Reviews 25:21–2 (2006), 2692–703; Barry W. Brooks et al., ‘Would the Australian Megafauna Have Become Extinct if Humans Had Never Colonised the Continent? Comments on “A Review of the Evidence for a Human Role in the Extinction of Australian Megafauna and an Alternative Explanation” by S. Wroe and J. Field’, Quaternary Science Reviews 26:3–4 (2007), 560–4; Chris S. M. Turney et al., ‘Late-Surviving Megafauna in Tasmania, Australia, Implicate Human Involvement in their Extinction’, PNAS 105:34 (2008), 12150–3; John Alroy, ‘A Multispecies Overkill Simulation of the End-Pleistocene Megafaunal Mass Extinction’, Science 292:5523 (2001), 1893–6; J. F. O’Connell and J. Allen, ‘Pre-LGM Sahul (Australia–New Guinea) and the Archaeology of Early Modern Humans’, in Rethinking the Human Evolution: New Behavioral and Biological Perspectives on the Origin and Dispersal of Modern Humans, ed. Paul Mellars (Cambridge: McDonald Institute for Archaeological Research, 2007), 400–1.
7. Graham Harvey, Animism: Respecting the Living World (Kent Town: Wakefield Press, 2005); Rane Willerslev, Soul Hunters: Hunting, Animism and Personhood Among the Siberian Yukaghirs (Berkeley: University of California Press, 2007); Elina Helander-Renvall, ‘Animism, Personhood and the Nature of Reality: Sami Perspectives’, Polar Record 46:1 (2010), 44–56; Istvan Praet, ‘Animal Conceptions in Animism and Conservation’, in Routledge Handbook of Human–Animal Studies, ed. Susan McHugh and Garry Marvin (New York: Routledge, 2014), 154–67; Nurit Bird-David, ‘Animism Revisited: Personhood, Environment, and Relational Epistemology’, Current Anthropology 40 (1999), s67–91; N. Bird-David, ‘Animistic Epistemology: Why Some Hunter-Gatherers Do Not Depict Animals’, Ethnos 71:1 (2006), 33–50.
8. Danny Naveh, ‘Changes in the Perception of Animals and Plants with the Shift to Agricultural Life: What Can Be Learnt from the Nayaka Case, a Hunter-Gatherer Society from the Rain Forests of Southern India?’ [in Hebrew], Animals and Society, 52 (2015), 7–8.
9. Howard N. Wallace, ‘The Eden Narrative’, Harvard Semitic Monographs 32 (1985), 147–81.
10. David Adams Leeming and Margaret Adams Leeming, Encyclopedia of Creation Myths (Santa Barbara: ABC-CLIO, 1994), 18; Sam D. Gill, Storytracking: Texts, Stories, and Histories in Central Australia (Oxford: Oxford University Press, 1998); Emily Miller Bonney, ‘Disarming the Snake Goddess: A Reconsideration of the Faience Figures from the Temple Repositories at Knossos’, Journal of Mediterranean Archaeology 24:2 (2011), 171–90; David Leeming, The Oxford Companion to World Mythology (Oxford and New York: Oxford University Press, 2005), 350.
11. Jerome H. Barkow, Leda Cosmides and John Tooby (eds), The Adapted Mind: Evolutionary Psychology and the Generation of Culture (Oxford: Oxford University Press, 1992); Richard W. Bloom and Nancy Dess (eds), Evolutionary Psychology and Violence: A Primer for Policymakers and Public Policy Advocates (Westport: Praeger, 2003); Charles Crawford and Catherine Salmon (eds), Evolutionary Psychology, Public Policy and Personal Decisions (New Jersey: Lawrence Erlbaum Associates, 2008); Patrick McNamara and David Trumbull, An Evolutionary Psychology of Leader–Follower Relations (New York: Nova Science, 2007); Joseph P. Forgas, Martie G. Haselton and William von Hippel (eds), Evolution and the Social Mind: Evolutionary Psychology and Social Cognition (New York: Psychology Press, 2011).
12. S. Held, M. Mendl, C. Devereux and R. W. Byrne, ‘Social Tactics of Pigs in a Competitive Foraging Task: the “Informed Forager” Paradigm’, Animal Behaviour 59:3 (2000), 569–76; S. Held, M. Mendl, C. Devereux and R. W. Byrne, ‘Studies in Social Cognition: from Primates to Pigs’, Animal Welfare 10 (2001), s209–17; H. B. Graves, ‘Behavior and Ecology of Wild and Feral Swine (Sus scrofa)’, Journal of Animal Science 58:2 (1984), 482–92; A. Stolba and D. G. M. Wood-Gush, ‘The Behaviour of Pigs in a Semi-Natural Environment’, Animal Production 48:2 (1989), 419–25; M. Spinka, ‘Behaviour in Pigs’, in The Ethology of Domestic Animals, 2nd edn, ed. P. Jensen, (Wallingford, UK: CAB International, 2009), 177–91; P. Jensen and D. G. M. Wood-Gush, ‘Social Interactions in a Group of Free-Ranging Sows’, Applied Animal Behaviour Science 12 (1984), 327–37; E. T. Gieling, R. E. Nordquist and F. J. van der Staay, ‘Assessing Learning and Memory in Pigs’, Animal Cognition 14 (2011), 151–73.
13. I. Horrell and J. Hodgson, ‘The Bases of Sow–Piglet Identification. 2. Cues Used by Piglets to Identify their Dam and Home Pen’, Applied Animal Behavior Science, 33 (1992), 329–43; D. M. Weary and D. Fraser, ‘Calling by Domestic Piglets: Reliable Signals of Need?’, Animal Behaviour 50:4 (1995), 1047–55; H. H. Kristensen et al., ‘The Use of Olfactory and Other Cues for Social Recognition by Juvenile Pigs’, Applied Animal Behaviour Science 72 (2001), 321–33.
14. M. Helft, ‘Pig Video Arcades Critique Life in the Pen’, Wired, 6 June 1997, http://archive.wired.com/science/discoveries/news/1997/06/4302, retrieved 27 January 2016.
15. Humane Society of the United States, ‘An HSUS Report: Welfare Issues with Gestation Crates for Pregnant Sows’, February 2013, http://www.humanesociety.org/assets/pdfs/farm/HSUS-Report-on-Gestation-Crates-for-Pregnant-Sows.pdf, retrieved 27 January 2016.
16. Turnbull and Solms, Brain and the Inner World, 90–2.
17. David Harel, Algorithmics: The Spirit of Computers, 3rd edn [in Hebrew] (Tel Aviv: Open University of Israel, 2001), 4–6; David Berlinski, The Advent of the Algorithm: The 300-Year Journey from an Idea to the Computer (San Diego: Harcourt, 2000); Hartley Rogers Jr, Theory of Recursive Functions and Effective Computability, 3rd edn (Cambridge, MA, and London: MIT Press, 1992), 1–5; Andreas Blass and Yuri Gurevich, ‘Algorithms: A Quest for Absolute Definitions’, Bulletin of European Association for Theoretical Computer Science 81 (2003), 195–225.
18. Daniel Kahneman, Thinking, Fast and Slow (New York: Farrar, Straus & Giroux, 2011); Dan Ariely, Predictably Irrational (New York: Harper, 2009).
19. Justin Gregg, Are Dolphins Really Smart? The Mammal Behind the Myth (Oxford: Oxford University Press, 2013), 81–7; Jaak Panksepp, ‘Affective Consciousness: Core Emotional Feelings in Animals and Humans’, Consciousness and Cognition 14:1 (2005), 30–80.
20. A. S. Fleming, D. H. O’Day and G. W. Kraemer, ‘Neurobiology of Mother–Infant Interactions: Experience and Central Nervous System Plasticity Across Development and Generations’, Neuroscience and Biobehavioral Reviews 23:5 (1999), 673–85; K. D. Broad, J. P. Curley and E. B. Keverne, ‘Mother–Infant Bonding and the Evolution of Mammalian Social Relationships’, Philosophical Transactions of the Royal Society B 361:1476 (2006), 2199–214; Kazutaka Mogi, Miho Nagasawa and Takefumi Kikusui, ‘Developmental Consequences and Biological Significance of Mother–Infant Bonding’, Progress in Neuro-Psychopharmacology and Biological Psychiatry 35:5 (2011), 1232–41; Shota Okabe et al., ‘The Importance of Mother–Infant Communication for Social Bond Formation in Mammals’, Animal Science Journal 83:6 (2012), 446–52.
21. Jean O’Malley Halley, Boundaries of Touch: Parenting and Adult–Child Intimacy (Urbana: University of Illinois Press, 2007), 50–1; Ann Taylor Allen, Feminism and Motherhood in Western Europe, 1890–1970: The Maternal Dilemma (New York: Palgrave Macmillan, 2005), 190.
22. Lucille C. Birnbaum, ‘Behaviorism in the 1920s’, American Quarterly 7:1 (1955), 18.
23. US Department of Labor (1929), ‘Infant Care’, Washington: United States Government Printing Office, http://www.mchlibrary.info/history/chbu/3121–1929.pdf.
24. Harry Harlow and Robert Zimmermann, ‘Affectional Responses in the Infant Monkey’, Science 130:3373 (1959), 421–32; Harry Harlow, ‘The Nature of Love’, American Psychologist 13 (1958), 673–85; Laurens D. Young et al., ‘Early Stress and Later Response to Separation in Rhesus Monkeys’, American Journal of Psychiatry 130:4 (1973), 400–5; K. D. Broad, J. P. Curley and E. B. Keverne, ‘Mother–Infant Bonding and the Evolution of Mammalian Social Relationships’, Philosophical Transactions of the Royal Society B 361:1476 (2006), 2199–214; Florent Pittet et al., ‘Effects of Maternal Experience on Fearfulness and Maternal Behavior in a Precocial Bird’, Animal Behavior 85:4 (2013), 797–805.
25. Jacques Cauvin, The Birth of the Gods and the Origins of Agriculture (Cambridge: Cambridge University Press, 2000); Tim Ingold, ‘From Trust to Domination: An Alternative History of Human–Animal Relations’, in Animals and Human Society: Changing Perspectives, ed. Aubrey Manning and James Serpell (New York: Routledge, 2002), 1–22; Roberta Kalechofsky, ‘Hierarchy, Kinship and Responsibility’, in A Communion of Subjects: Animals in Religion, Science and Ethics, ed. Kimberley Patton and Paul Waldau (New York: Columbia University Press, 2006), 91–102; Nerissa Russell, Social Zooarchaeology: Humans and Animals in Prehistory (Cambridge: Cambridge University Press, 2012), 207–58; Margo DeMello, Animals and Society: An Introduction to Human–Animal Studies (New York: Columbia University Press, 2012).
26. Olivia Lang, ‘Hindu Sacrifice of 250,000 Animals Begins’, Guardian, 24 November 2009, accessed 21 December 2014, http://www.theguardian.com/world/2009/nov/24/hindu-sacrifice-gadhimai-festival-nepal.
27. Benjamin R. Foster (ed.), The Epic of Gilgamesh (New York, London: W. W. Norton, 2001), 90.
28. Noah J. Cohen, Tsa’ar Ba’ale Hayim: Prevention of Cruelty to Animals: Its Bases, Development and Legislation in Hebrew Literature (Jerusalem and New York: Feldheim Publishers, 1976); Roberta Kalechofsky, Judaism and Animal Rights: Classical and Contemporary Responses (Marblehead: Micah Publications, 1992); Dan Cohn-Sherbok, ‘Hope for the Animal Kingdom: A Jewish Vision’, in A Communion of Subjects: Animals in Religion, Science and Ethics, ed. Kimberley Patton and Paul Waldau (New York: Columbia University Press, 2006), 81–90; Ze’ev Levi, ‘Ethical Issues of Animal Welfare in Jewish Thought’, in Judaism and Environmental Ethics: A Reader, ed. Martin D. Yaffe (Plymouth: Lexington, 2001), 321–32; Norm Phelps, The Dominion of Love: Animal Rights According to the Bible (New York: Lantern Books, 2002); David Sears, The Vision of Eden: Animal Welfare and Vegetarianism in Jewish Law and Mysticism (Spring Valley: Orot, 2003); Nosson Slifkin, Man and Beast: Our Relationships with Animals in Jewish Law and Thought (New York: Lambda, 2006).
29. Talmud Bavli, Bava Metzia, 85:71.
30. Christopher Chapple, Nonviolence to Animals, Earth and Self in Asian Traditions (New York: State University of New York Press, 1993); Panchor Prime, Hinduism and Ecology: Seeds of Truth (London: Cassell, 1992); Christopher Key Chapple, ‘The Living Cosmos of Jainism: A Traditional Science Grounded in Environmental Ethics’, Daedalus 130:4 (2001), 207–24; Norm Phelps, The Great Compassion: Buddhism and Animal Rights (New York: Lantern Books, 2004); Damien Keown, Buddhist Ethics: A Very Short Introduction (Oxford: Oxford University Press, 2005), ch. 3; Kimberley Patton and Paul Waldau (eds), A Communion of Subjects: Animals in Religion, Science and Ethics (New York: Columbia University Press, 2006), esp. 179–250; Pragati Sahni, Environmental Ethics in Buddhism: A Virtues Approach (New York: Routledge, 2008); Lisa Kemmerer and Anthony J. Nocella II (eds), Call to Compassion: Reflections on Animal Advocacy from the World’s Religions (New York: Lantern, 2011), esp. 15–103; Lisa Kemmerer, Animals and World Religions (Oxford: Oxford University Press, 2012), esp. 56–126; Irina Aristarkhova, ‘Thou Shall Not Harm All Living Beings: Feminism, Jainism and Animals’, Hypatia 27:3 (2012), 636–50; Eva de Clercq, ‘Karman and Compassion: Animals in the Jain Universal History’, Religions of South Asia 7 (2013), 141–57.
31. Naveh, ‘Changes in the Perception of Animals and Plants’, 11.
3 The Human Spark
1. ‘Evolution, Creationism, Intelligent Design’, Gallup, accessed 20 December 2014, http://www.gallup.com/poll/21814/evolution-creationism-intelligentdesign.aspx; Frank Newport, ‘In US, 46 per cent Hold Creationist View of Human Origins’, Gallup, 1 June 2012, accessed 21 December 2014, http://www.gallup.com/poll/155003/hold-creationist-view-human-origins.aspx.
2. Gregg, Are Dolphins Really Smart?, 82–3.
3. Stanislas Dehaene, Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts (New York: Viking, 2014); Steven Pinker, How the Mind Works (New York: W. W. Norton, 1997).
4. Dehaene, Consciousness and the Brain.
5. Pundits may point to Gödel’s incompleteness theorem, according to which no system of mathematical axioms can prove all arithmetic truths. There will always be some true statements that cannot be proven within the system. In popular literature this theorem is sometimes hijacked to account for the existence of mind. Allegedly, minds are needed to deal with such unprovable truths. However, it is far from obvious why living beings need to engage with such arcane mathematical truths in order to survive and reproduce. In fact, the vast majority of our conscious decisions do not involve such issues at all.
6. Christopher Steiner, Automate This: How Algorithms Came to Rule Our World (New York: Penguin, 2012), 215; Tom Vanderbilt, ‘Let the Robot Drive: The Autonomous Car of the Future Is Here’, Wired, 20 January 2012, accessed 21 December 2014, http://www.wired.com/2012/01/ff_autonomouscars/all/; Chris Urmson, ‘The Self-Driving Car Logs More Miles on New Wheels’, Google Official Blog, 7 August 2012, accessed 23 December 2014, http://googleblog.blogspot.hu/2012/08/the-self-driving-car-logs-more-miles-on.html; Matt Richtel and Conor Dougherty, ‘Google’s Driverless Cars Run into Problem: Cars with Drivers’, New York Times, 1 September 2015, accessed 2 September 2015, http://www.nytimes.com/2015/09/02/technology/personaltech/google-says-its-not-the-driverless-cars-fault-its-other-drivers.html?_r=1.
7. Dehaene, Consciousness and the Brain.
8. Ibid., ch. 7.
9. ‘The Cambridge Declaration on Consciousness’, 7 July 2012, accessed 21 December 2014, https://web.archive.org/web/20131109230457/http://fcmconference.org/img/CambridgeDeclarationOnConsciousness.pdf.
10. John F. Cryan, Rita J. Valentino and Irwin Lucki, ‘Assessing Substrates Underlying the Behavioral Effects of Antidepressants Using the Modified Rat Forced Swimming Test’, Neuroscience and Biobehavioral Reviews 29:4–5 (2005), 569–74; Benoit Petit-Demoulière, Frank Chenu and Michel Bourin, ‘Forced Swimming Test in Mice: A Review of Antidepressant Activity’, Psychopharmacology 177:3 (2005), 245–55; Leda S. B. Garcia et al., ‘Acute Administration of Ketamine Induces Antidepressant-like Effects in the Forced Swimming Test and Increases BDNF Levels in the Rat Hippocampus’, Progress in Neuro-Psychopharmacology and Biological Psychiatry 32:1 (2008), 140–4; John F. Cryan, Cedric Mombereau and Annick Vassout, ‘The Tail Suspension Test as a Model for Assessing Antidepressant Activity: Review of Pharmacological and Genetic Studies in Mice’, Neuroscience and Biobehavioral Reviews 29:4–5 (2005), 571–625; James J. Crowley, Julie A. Blendy and Irwin Lucki, ‘Strain-dependent Antidepressant-like Effects of Citalopram in the Mouse Tail Suspension Test’, Psychopharmacology 183:2 (2005), 257–64; Juan C. Brenes, Michael Padilla and Jaime Fornaguera, ‘A Detailed Analysis of Open-Field Habituation and Behavioral and Neurochemical Antidepressant-like Effects in Postweaning Enriched Rats’, Behavioral Brain Research 197:1 (2009), 125–37; Juan Carlos Brenes Sáenz, Odir Rodríguez Villagra and Jaime Fornaguera Trías, ‘Factor Analysis of Forced Swimming Test, Sucrose Preference Test and Open Field Test on Enriched, Social and Isolated Reared Rats’, Behavioral Brain Research 169:1 (2006), 57–65.
11. Marc Bekoff, ‘Observations of Scent-Marking and Discriminating Self from Others by a Domestic Dog (Canis familiaris): Tales of Displaced Yellow Snow’, Behavioural Processes 55:2 (2001), 75–9.
12. For different levels of self-consciousness, see: Gregg, Are Dolphins Really Smart?, 59–66.
13. Carolyn R. Raby et al., ‘Planning for the Future by Western Scrub Jays’, Nature 445:7130 (2007), 919–21.
14. Michael Balter, ‘Stone-Throwing Chimp is Back – and This Time It’s Personal’, Science, 9 May 2012, accessed 21 December 2014, http://news.sciencemag.org/2012/05/stone-throwing-chimp-back-and-time-itspersonal; Sara J. Shettleworth, ‘Clever Animals and Killjoy Explanations in Comparative Psychology’, Trends in Cognitive Sciences 14:11 (2010), 477–81.
15. Gregg, Are Dolphins Really Smart?; Nicola S. Clayton, Timothy J. Bussey and Anthony Dickinson, ‘Can Animals Recall the Past and Plan for the Future?’, Nature Reviews Neuroscience 4:8 (2003), 685–91; William A. Roberts, ‘Are Animals Stuck in Time?’, Psychological Bulletin 128:3 (2002), 473–89; Endel Tulving, ‘Episodic Memory and Autonoesis: Uniquely Human?’, in The Missing Link in Cognition: Evolution of Self-Knowing Consciousness, ed. Herbert S. Terrace and Janet Metcalfe (Oxford: Oxford University Press), 3–56; Mariam Naqshbandi and William A. Roberts, ‘Anticipation of Future Events in Squirrel Monkeys (Saimiri sciureus) and Rats (Rattus norvegicus): Tests of the Bischof–Kohler Hypothesis’, Journal of Comparative Psychology 120:4 (2006), 345–57.
16. I. B. A. Bartal, J. Decety and P. Mason, ‘Empathy and Pro-Social Behavior in Rats’, Science 334:6061 (2011), 1427–30; Gregg, Are Dolphins Really Smart?, 89.
17. Christopher B. Ruff, Erik Trinkaus and Trenton W. Holliday, ‘Body Mass and Encephalization in Pleistocene Homo’, Nature 387:6629 (1997), 173–6; Maciej Henneberg and Maryna Steyn, ‘Trends in Cranial Capacity and Cranial Index in Subsaharan Africa During the Holocene’, American Journal of Human Biology 5:4 (1993), 473–9; Drew H. Bailey and David C. Geary, ‘Hominid Brain Evolution: Testing Climatic, Ecological, and Social Competition Models’, Human Nature 20:1 (2009), 67–79; Daniel J. Wescott and Richard L. Jantz, ‘Assessing Craniofacial Secular Change in American Blacks and Whites Using Geometric Morphometry’, in Modern Morphometrics in Physical Anthropology: Developments in Primatology: Progress and Prospects, ed. Dennis E. Slice (New York: Plenum Publishers, 2005), 231–45.
18. See also Edward O. Wilson, The Social Conquest of the Earth (New York: Liveright, 2012).
19. Cyril Edwin Black (ed.), The Transformation of Russian Society: Aspects of Social Change since 1861 (Cambridge, MA: Harvard University Press, 1970), 279.
20. NAEMI09, ‘Nicolae Ceaușescu LAST SPEECH (english subtitles) part 1 of 2’, 22 April 2010, accessed 21 December 2014, http://www.youtube.com/watch?v=wWIbCtz_Xwk.
21. Tom Gallagher, Theft of a Nation: Romania since Communism (London: Hurst, 2005).
22. Robin Dunbar, Grooming, Gossip, and the Evolution of Language (Cambridge, MA: Harvard University Press, 1998).
23. TVP University, ‘Capuchin Monkeys Reject Unequal Pay’, 15 December 2012, accessed 21 December 2014, http://www.youtube.com/watch?v=lKhAd0Tyny0.
24. Quoted in Christopher Duffy, Military Experience in the Age of Reason (London: Routledge, 2005), 98–9.
25. Serhii Plokhy, The Last Empire: The Final Days of the Soviet Union (London: Oneworld, 2014), 309.
4 The Storytellers
1. Fekri A. Hassan, ‘Holocene Lakes and Prehistoric Settlements of the Western Fayum, Egypt’, Journal of Archaeological Science 13:5 (1986), 393–504; Gunther Garbrecht, ‘Water Storage (Lake Moeris) in the Fayum Depression, Legend or Reality?’, Irrigation and Drainage Systems 1:3 (1987), 143–57; Gunther Garbrecht, ‘Historical Water Storage for Irrigation in the Fayum Depression (Egypt)’, Irrigation and Drainage Systems 10:1 (1996), 47–76.
2. Yehuda Bauer, A History of the Holocaust (Danbury: Franklin Watts, 2001), 249.
3. Jean C. Oi, State and Peasant in Contemporary China: The Political Economy of Village Government (Berkeley: University of California Press, 1989), 91; Jasper Becker, Hungry Ghosts: China’s Secret Famine (London: John Murray, 1996); Frank Dikötter, Mao’s Great Famine: The History of China’s Most Devastating Catastrophe, 1958–62 (London: Bloomsbury, 2010).
4. Martin Meredith, The Fate of Africa: From the Hopes of Freedom to the Heart of Despair: A History of Fifty Years of Independence (New York: Public Affairs, 2006); Sven Rydenfelt, ‘Lessons from Socialist Tanzania’, The Freeman 36:9 (1986); David Blair, ‘Africa in a Nutshell’, Telegraph, 10 May 2006, accessed 22 December 2014, http://blogs.telegraph.co.uk/news/davidblair/3631941/Africa_in_a_nutshell/.
5. Roland Anthony Oliver, Africa since 1800, 5th edn (Cambridge: Cambridge University Press, 2005), 100–23; David van Reybrouck, Congo: The Epic History of a People (New York: HarperCollins, 2014), 58–9.
6. Ben Wilbrink, ‘Assessment in Historical Perspective’, Studies in Educational Evaluation 23:1 (1997), 31–48.
7. M. C. Lemon, Philosophy of History (London and New York: Routledge, 2003), 28–44; Siep Stuurman, ‘Herodotus and Sima Qian: History and the Anthropological Turn in Ancient Greece and Han China’, Journal of World History 19:1 (2008), 1–40.
8. William Kelly Simpson, The Literature of Ancient Egypt (New Haven: Yale University Press, 1973), 332–3.
5 The Odd Couple
1. C. Scott Dixon, Protestants: A History from Wittenberg to Pennsylvania, 1517–1740 (Chichester, UK: Wiley-Blackwell, 2010), 15; Peter W. Williams, America’s Religions: From Their Origins to the Twenty-First Century (Urbana: University of Illinois Press, 2008), 82.
2. Glenn Hausfater and Sarah Blaffer Hrdy (eds), Infanticide: Comparative and Evolutionary Perspectives (New York: Aldine, 1984), 449; Valerie Alia, Names and Nunavut: Culture and Identity in the Inuit Homeland (New York: Berghahn Books, 2007), 23; Lewis Petrinovich, Human Evolution, Reproduction and Morality (Cambridge, MA: MIT Press, 1998), 256; Richard A. Posner, Sex and Reason (Cambridge, MA: Harvard University Press, 1992), 289.
3. Ronald K. Delph, ‘Valla Grammaticus, Agostino Steuco, and the Donation of Constantine’, Journal of the History of Ideas 57:1 (1996), 55–77; Joseph M. Levine, ‘Reginald Pecock and Lorenzo Valla on the Donation of Constantine’, Studies in the Renaissance 20 (1973), 118–43.
4. Gabriele Boccaccini, Roots of Rabbinic Judaism (Cambridge: Eerdmans, 2002); Shaye J. D. Cohen, From the Maccabees to the Mishnah, 2nd edn (Louisville: Westminster John Knox Press, 2006), 153–7; Lee M. McDonald and James A. Sanders (eds), The Canon Debate (Peabody: Hendrickson, 2002), 4.
5. Sam Harris, The Moral Landscape: How Science Can Determine Human Values (New York: Free Press, 2010).
6 The Modern Covenant
1. Gerald S. Wilkinson, ‘The Social Organization of the Common Vampire Bat II’, Behavioral Ecology and Sociobiology 17:2 (1985), 123–34; Gerald S. Wilkinson, ‘Reciprocal Food Sharing in the Vampire Bat’, Nature 308:5955 (1984), 181–4; Raul Flores Crespo et al., ‘Foraging Behavior of the Common Vampire Bat Related to Moonlight’, Journal of Mammalogy 53:2 (1972), 366–8.
2. Goh Chin Lian, ‘Admin Service Pay: Pensions Removed, National Bonus to Replace GDP Bonus’, Straits Times, 8 April 2013, retrieved 9 February 2016, http://www.straitstimes.com/singapore/admin-service-pay-pensionsremoved-national-bonus-to-replace-gdp-bonus.
3. Edward Wong, ‘In China, Breathing Becomes a Childhood Risk’, New York Times, 22 April 2013, accessed 22 December 2014, http://www.nytimes.com/2013/04/23/world/asia/pollution-is-radically-changing-childhood-in-chinas-cities.html?pagewanted=all&_r=0; Barbara Demick, ‘China Entrepreneurs Cash in on Air Pollution’, Los Angeles Times, 2 February 2013, accessed 22 December 2014, http://articles.latimes.com/2013/feb/02/world/la-fg-china-pollution-20130203.
4. IPCC, Climate Change 2014: Mitigation of Climate Change – Summary for Policymakers, ed. Ottmar Edenhofer et al. (Cambridge and New York: Cambridge University Press, 2014), 6.
5. UNEP, The Emissions Gap Report 2012 (Nairobi: UNEP, 2012); IEA, Energy Policies of IEA Countries: The United States (Paris: IEA, 2008).
6. For a detailed discussion see Ha-Joon Chang, 23 Things They Don’t Tell You About Capitalism (New York: Bloomsbury Press, 2010).
7 The Humanist Revolution
1. Jean-Jacques Rousseau, Émile, ou de l’éducation (Paris, 1967), 348.
2. ‘Journalists Syndicate Says Charlie Hebdo Cartoons “Hurt Feelings”, Washington Okays’, Egypt Independent, 14 January 2015, accessed 12 August 2015, http://www.egyptindependent.com/news/journalists-syndicate-sayscharlie-hebdo-cartoons-percentE2percent80percent98hurt-feelings-washington-okays.
3. Naomi Darom, ‘Evolution on Steroids’, Haaretz, 13 June 2014.
4. Walter Horace Bruford, The German Tradition of Self-Cultivation: ‘Bildung’ from Humboldt to Thomas Mann (London and New York: Cambridge University Press, 1975), 24, 25.
5. ‘All-Time 100 TV Shows: Survivor’, Time, 6 September 2007, retrieved 12 August 2015, http://time.com/3103831/survivor/.
6. Phil Klay, Redeployment (London: Canongate, 2015), 170.
7. Yuval Noah Harari, The Ultimate Experience: Battlefield Revelations and the Making of Modern War Culture, 1450–2000 (Houndmills: Palgrave Macmillan, 2008); Yuval Noah Harari, ‘Armchairs, Coffee and Authority: Eye-witnesses and Flesh-witnesses Speak about War, 1100–2000’, Journal of Military History 74:1 (January 2010), 53–78.
8. ‘Angela Merkel Attacked over Crying Refugee Girl’, BBC, 17 July 2015, accessed 12 August 2015, http://www.bbc.com/news/world-europe-33555619.
9. Laurence Housman, War Letters of Fallen Englishmen (Philadelphia: University of Pennsylvania Press, 2002), 159.
10. Mark Bowden, Black Hawk Down: The Story of Modern Warfare (New York: New American Library, 2001), 301–2.
11. Adolf Hitler, Mein Kampf, trans. Ralph Manheim (Boston: Houghton Mifflin, 1943), 165.
12. Evan Osnos, Age of Ambition: Chasing Fortune, Truth and Faith in the New China (London: Vintage, 2014), 95.
13. Mark Harrison (ed.), The Economics of World War II: Six Great Powers in International Comparison (Cambridge: Cambridge University Press, 1998), 3–10; John Ellis, World War II: A Statistical Survey (New York: Facts on File, 1993); I. C. B. Dear (ed.), The Oxford Companion to the Second World War (Oxford: Oxford University Press, 1995).
14. Donna Haraway, ‘A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century’, in Simians, Cyborgs and Women: The Reinvention of Nature, ed. Donna Haraway (New York: Routledge, 1991), 149–81.
8 The Time Bomb in the Laboratory
1. For a detailed discussion see Michael S. Gazzaniga, Who’s in Charge?: Free Will and the Science of the Brain (New York: Ecco, 2011).
2. Chun Siong Soon et al., ‘Unconscious Determinants of Free Decisions in the Human Brain’, Nature Neuroscience 11:5 (2008), 543–5. See also Daniel Wegner, The Illusion of Conscious Will (Cambridge, MA: MIT Press, 2002); Benjamin Libet, ‘Unconscious Cerebral Initiative and the Role of Conscious Will in Voluntary Action’, Behavioral and Brain Sciences 8 (1985), 529–66.
3. Sanjiv K. Talwar et al., ‘Rat Navigation Guided by Remote Control’, Nature 417:6884 (2002), 37–8; Ben Harder, ‘Scientists “Drive” Rats by Remote Control’, National Geographic, 1 May 2012, accessed 22 December 2014, http://news.nationalgeographic.com/news/2002/05/0501_020501_roborats.html; Tom Clarke, ‘Here Come the Ratbots: Desire Drives Remote-Controlled Rodents’, Nature, 2 May 2002, accessed 22 December 2014, http://www.nature.com/news/1998/020429/full/news020429-9.html; Duncan Graham-Rowe, ‘“Robo-rat” Controlled by Brain Electrodes’, New Scientist, 1 May 2002, accessed 22 December 2014, http://www.newscientist.com/article/dn2237-roborat-controlled-bybrain-electrodes.html#.UwOPiNrNtkQ.
4. http://fusion.net/story/204316/darpa-is-implanting-chips-in-soldiers-brains/; http://www.theverge.com/2014/5/28/5758018/darpateams-begin-work-on-tiny-brain-implant-to-treat-ptsd.
5. Smadar Reisfeld, ‘Outside of the Cuckoo’s Nest’, Haaretz, 6 March 2015.
6. Dan Hurley, ‘US Military Leads Quest for Futuristic Ways to Boost IQ’, Newsweek, 5 March 2014, http://www.newsweek.com/2014/03/14/us-military-leads-quest-futuristic-ways-boost-iq-247945.html, accessed 9 January 2015; Human Effectiveness Directorate, http://www.wpafb.af.mil/afrl/rh/index.asp; R. Andy McKinley et al., ‘Acceleration of Image Analyst Training with Transcranial Direct Current Stimulation’, Behavioral Neuroscience 127:6 (2013), 936–46; Jeremy T. Nelson et al., ‘Enhancing Vigilance in Operators with Prefrontal Cortex Transcranial Direct Current Stimulation (TDCS)’, NeuroImage 85 (2014), 909–17; Melissa Scheldrup et al., ‘Transcranial Direct Current Stimulation Facilitates Cognitive Multi-Task Performance Differentially Depending on Anode Location and Subtask’, Frontiers in Human Neuroscience 8 (2014); Oliver Burkeman, ‘Can I Increase my Brain Power?’, Guardian, 4 January 2014, http://www.theguardian.com/science/2014/jan/04/can-i-increase-my-brain-power, accessed 9 January 2016; Heather Kelly, ‘Wearable Tech to Hack Your Brain’, CNN, 23 October 2014, http://www.cnn.com/2014/10/22/tech/innovation/brain-stimulation-tech/, accessed 9 January 2016.
7. Sally Adee, ‘Zap Your Brain into the Zone: Fast Track to Pure Focus’, New Scientist, 6 February 2012, accessed 22 December 2014, http://www.newscientist.com/article/mg21328501.600-zap-your-brain-intothe-zone-fast-track-to-pure-focus.html. See also: R. Douglas Fields, ‘Amping Up Brain Function: Transcranial Stimulation Shows Promise in Speeding Up Learning’, Scientific American, 25 November 2011, accessed 22 December 2014, http://www.scientificamerican.com/article/amping-up-brain-function.
8. Sally Adee, ‘How Electrical Brain Stimulation Can Change the Way We Think’, The Week, 30 March 2012, accessed 22 December 2014, http://theweek.com/article/index/226196/how-electrical-brain-stimulation-can-change-the-way-we-think/2.
9. E. Bianconi et al., ‘An Estimation of the Number of Cells in the Human Body’, Annals of Human Biology 40:6 (2013), 463–71.
10. Oliver Sacks, The Man Who Mistook His Wife for a Hat (London: Picador, 1985), 73–5.
11. Joseph E. LeDoux, Donald H. Wilson and Michael S. Gazzaniga, ‘A Divided Mind: Observations on the Conscious Properties of the Separated Hemispheres’, Annals of Neurology 2:5 (1977), 417–21. See also: D. Galin, ‘Implications for Psychiatry of Left and Right Cerebral Specialization: A Neurophysiological Context for Unconscious Processes’, Archives of General Psychiatry 31:4 (1974), 572–83; R. W. Sperry, M. S. Gazzaniga and J. E. Bogen, ‘Interhemispheric Relationships: The Neocortical Commissures: Syndromes of Hemisphere Disconnection’, in Handbook of Clinical Neurology, ed. P. J. Vinken and G. W. Bruyn (Amsterdam: North Holland Publishing Co., 1969), vol. 4.
12. Michael S. Gazzaniga, The Bisected Brain (New York: Appleton-Century-Crofts, 1970); Gazzaniga, Who’s in Charge?; Carl Senior, Tamara Russell and Michael S. Gazzaniga, Methods in Mind (Cambridge, MA: MIT Press, 2006); David Wolman, ‘The Split Brain: A Tale of Two Halves’, Nature 483 (14 March 2012), 260–3.
13. Galin, ‘Implications for Psychiatry of Left and Right Cerebral Specialization’, 573–4.
14. Sally P. Springer and Georg Deutsch, Left Brain, Right Brain, 3rd edn (New York: W. H. Freeman, 1989), 32–6.
15. Kahneman, Thinking, Fast and Slow, 377–410. See also Gazzaniga, Who’s in Charge?, ch. 3.
16. Eran Chajut et al., ‘In Pain Thou Shalt Bring Forth Children: The Peak-and-End Rule in Recall of Labor Pain’, Psychological Science 25:12 (2014), 2266–71.
17. Ulla Waldenström, ‘Women’s Memory of Childbirth at Two Months and One Year after the Birth’, Birth 30:4 (2003), 248–54; Ulla Waldenström, ‘Why Do Some Women Change Their Opinion about Childbirth over Time?’, Birth 31:2 (2004), 102–7.
18. Gazzaniga, Who’s in Charge?, ch. 3.
19. Jorge Luis Borges, Collected Fictions, trans. Andrew Hurley (New York: Penguin Books, 1999), 308–9. For a Spanish version see: Jorge Luis Borges, ‘Un problema’, in Obras completas, vol. 3 (Buenos Aires: Emece Editores, 1968–9), 29–30.
20. Mark Thompson, The White War: Life and Death on the Italian Front, 1915–1919 (New York: Basic Books, 2009).
9 The Great Decoupling
1. F. M. Anderson (ed.), The Constitutions and Other Select Documents Illustrative of the History of France: 1789–1907, 2nd edn (Minneapolis: H. W. Wilson, 1908), 184–5; Alan Forrest, ‘L’armée de l’an II: la levée en masse et la création d’un mythe républicain’, Annales historiques de la Révolution française 335 (2004), 111–30.
2. Morris Edmund Spears (ed.), World War Issues and Ideals: Readings in Contemporary History and Literature (Boston and New York: Ginn and Company, 1918), 242. The most significant recent study, widely quoted by both proponents and opponents, attempts to prove that soldiers of democracy fight better: Dan Reiter and Allan C. Stam, Democracies at War (Princeton: Princeton University Press, 2002).
3. Doris Stevens, Jailed for Freedom (New York: Boni and Liveright, 1920), 290. See also Susan R. Grayzel, Women and the First World War (Harlow: Longman, 2002), 101–6; Christine Bolt, The Women’s Movements in the United States and Britain from the 1790s to the 1920s (Amherst: University of Massachusetts Press, 1993), 236–76; Birgitta Bader-Zaar, ‘Women’s Suffrage and War: World War I and Political Reform in a Comparative Perspective’, in Suffrage, Gender and Citizenship: International Perspectives on Parliamentary Reforms, ed. Irma Sulkunen, Seija-Leena Nevala-Nurmi and Pirjo Markkola (Newcastle upon Tyne: Cambridge Scholars Publishing, 2009), 193–218.
4. Matt Richtel and Conor Dougherty, ‘Google’s Driverless Cars Run into Problem: Cars with Drivers’, New York Times, 1 September 2015, accessed 2 September 2015, http://www.nytimes.com/2015/09/02/technology/personaltech/google-says-its-not-the-driverless-cars-fault-its-otherdrivers.html?_r=1; Shawn DuBravac, Digital Destiny: How the New Age of Data Will Transform the Way We Work, Live and Communicate (Washington DC: Regnery Publishing, 2015), 127–56.
5. Bradley Hope, ‘Lawsuit Against Exchanges Over “Unfair Advantage” for High-Frequency Traders Dismissed’, Wall Street Journal, 29 April 2015, accessed 6 October 2015, http://www.wsj.com/articles/lawsuit-against-exchanges-over-unfair-advantage-for-high-frequency-tradersdismissed-1430326045; David Levine, ‘High-Frequency Trading Machines Favored Over Humans by CME Group, Lawsuit Claims’, Huffington Post, 26 June 2012, accessed 6 October 2015, http://www.huffingtonpost.com/2012/06/26/high-frequency-trading-lawsuit_n_1625648.html; Lu Wang, Whitney Kisling and Eric Lam, ‘Fake Post Erasing $136 Billion Shows Markets Need Humans’, Bloomberg, 23 April 2013, accessed 22 December 2014, http://www.bloomberg.com/news/2013-04-23/fake-report-erasing-136-billion-shows-market-s-fragility.html; Matthew Philips, ‘How the Robots Lost: High-Frequency Trading’s Rise and Fall’, Bloomberg Businessweek, 6 June 2013, accessed 22 December 2014, http://www.businessweek.com/printer/articles/123468-how-the-robots-losthigh-frequency-tradings-rise-and-fall; Steiner, Automate This, 2–5, 11–52; Luke Dormehl, The Formula: How Algorithms Solve All Our Problems – And Create More (London: Penguin, 2014), 223.
6. Jordan Weissmann, ‘iLawyer: What Happens when Computers Replace Attorneys?’, Atlantic, 19 June 2012, accessed 22 December 2014, http://www.theatlantic.com/business/archive/2012/06/ilawyer-whathappens-when-computers-replace-attorneys/258688; John Markoff, ‘Armies of Expensive Lawyers, Replaced by Cheaper Software’, New York Times, 4 March 2011, accessed 22 December 2014, http://www.nytimes.com/2011/03/05/science/05legal.html?pagewanted=all&_r=0; Adi Narayan, ‘The fMRI Brain Scan: A Better Lie Detector?’, Time, 20 July 2009, accessed 22 December 2014, http://content.time.com/time/health/article/0,8599,1911546-2,00.html; Elena Rusconi and Timothy Mitchener-Nissen, ‘Prospects of Functional Magnetic Resonance Imaging as Lie Detector’, Frontiers in Human Neuroscience 7:54 (2013); Steiner, Automate This, 217; Dormehl, The Formula, 229.
7. B. P. Woolf, Building Intelligent Interactive Tutors: Student-centered Strategies for Revolutionizing E-learning (Burlington: Morgan Kaufmann, 2010); Annie Murphy Paul, ‘The Machines Are Taking Over’, New York Times, 14 September 2012, accessed 22 December 2014, http://www.nytimes.com/2012/09/16/magazine/how-computerized-tutors-arelearning-to-teach-humans.html?_r=0; P. J. Munoz-Merino, C. D. Kloos and M. Munoz-Organero, ‘Enhancement of Student Learning Through the Use of a Hinting Computer e-Learning System and Comparison With Human Teachers’, IEEE Transactions on Education 54:1 (2011), 164–7; Mindojo, accessed 14 July 2015, http://mindojo.com/.
8. Steiner, Automate This, 146–62; Ian Steadman, ‘IBM’s Watson Is Better at Diagnosing Cancer than Human Doctors’, Wired, 11 February 2013, accessed 22 December 2014, http://www.wired.co.uk/news/archive/2013-02/11/ibm-watson-medical-doctor; ‘Watson Is Helping Doctors Fight Cancer’, IBM, accessed 22 December 2014, http://www-03.ibm.com/innovation/us/watson/watson_in_healthcare.shtml; Vinod Khosla, ‘Technology Will Replace 80 per cent of What Doctors Do’, Fortune, 4 December 2012, accessed 22 December 2014, http://tech.fortune.cnn.com/2012/12/04/technology-doctors-khosla; Ezra Klein, ‘How Robots Will Replace Doctors’, Washington Post, 10 January 2011, accessed 22 December 2014, http://www.washingtonpost.com/blogs/wonkblog/post/how-robots-willreplace-doctors/2011/08/25/gIQASA17AL_blog.html.
9. Tzezana, The Guide to the Future, 62–4.
10. Steiner, Automate This, 155.
11. http://www.mattersight.com.
12. Steiner, Automate This, 178–82; Dormehl, The Formula, 21–4; Shana Lebowitz, ‘Every Time You Dial into These Call Centers, Your Personality Is Being Silently Assessed’, Business Insider, 3 September 2015, retrieved 31 January 2016, http://www.businessinsider.com/how-mattersight-uses-personality-science-2015-9.
13. Rebecca Morelle, ‘Google Machine Learns to Master Video Games’, BBC, 25 February 2015, accessed 12 August 2015, http://www.bbc.com/news/science-environment-31623427; Elizabeth Lopatto, ‘Google’s AI Can Learn to Play Video Games’, The Verge, 25 February 2015, accessed 12 August 2015, http://www.theverge.com/2015/2/25/8108399/google-ai-deepmind-videogames; Volodymyr Mnih et al., ‘Human-Level Control through Deep Reinforcement Learning’, Nature, 26 February 2015, accessed 12 August 2015, http://www.nature.com/nature/journal/v518/n7540/full/nature14236.html.
14. Michael Lewis, Moneyball: The Art of Winning an Unfair Game (New York: W. W. Norton, 2003). Also see the 2011 film Moneyball, directed by Bennett Miller and starring Brad Pitt as Billy Beane.
15. Frank Levy and Richard Murnane, The New Division of Labor: How Computers Are Creating the Next Job Market (Princeton: Princeton University Press, 2004); Dormehl, The Formula, 225–6.
16. Tom Simonite, ‘When Your Boss Is an Uber Algorithm’, MIT Technology Review, 1 December 2015, retrieved 4 February 2016, https://www.technologyreview.com/s/543946/when-your-boss-is-an-uberalgorithm/.
17. Simon Sharwood, ‘Software “Appointed to Board” of Venture Capital Firm’, The Register, 18 May 2014, accessed 12 August 2015, http://www.theregister.co.uk/2014/05/18/software_appointed_to_board_of_venture_capital_firm/; John Bates, ‘I’m the Chairman of the Board’, Huffington Post, 6 April 2014, accessed 12 August 2015, http://www.huffingtonpost.com/john-bates/im-the-chairman-of-the-bo_b_5440591.html; Colm Gorey, ‘I’m Afraid I Can’t Invest in That, Dave: AI Appointed to VC Funding Board’, Silicon Republic, 15 May 2014, accessed 12 August 2015, https://www.siliconrepublic.com/discovery/2014/05/15/im-afraid-i-cant-invest-in-that-dave-ai-appointed-to-vc-funding-board.
18. Steiner, Automate This, 89–101; D. H. Cope, Comes the Fiery Night: 2,000 Haiku by Man and Machine (Santa Cruz: Create Space, 2011). See also: Dormehl, The Formula, 174–80, 195–8, 200–2, 216–20; Steiner, Automate This, 75–89.
19. Carl Benedikt Frey and Michael A. Osborne, ‘The Future of Employment: How Susceptible Are Jobs to Computerisation?’, 17 September 2013, accessed 12 August 2015, http://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf.
20. E. Brynjolfsson and A. McAfee, Race Against the Machine: How the Digital Revolution Is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy (Lexington: Digital Frontier Press, 2011).
21. Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford University Press, 2014).
22. Ido Efrati, ‘Researchers Conducted a Successful Experiment with an “Artificial Pancreas” Connected to an iPhone’ [in Hebrew], Haaretz, 17 June 2014, accessed 23 December 2014, http://www.haaretz.co.il/news/health/1.2350956. Moshe Phillip et al., ‘Nocturnal Glucose Control with an Artificial Pancreas at a Diabetes Camp’, New England Journal of Medicine 368:9 (2013), 824–33; ‘Artificial Pancreas Controlled by iPhone Shows Promise in Diabetes Trial’, Today, 17 June 2014, accessed 22 December 2014, http://www.todayonline.com/world/artificial-pancreas-controlled-iphone-shows-promise-diabetestrial?singlepage=true.
23. Dormehl, The Formula, 7–16.
24. Martha Mendoza, ‘Google Develops Contact Lens Glucose Monitor’, Yahoo News, 17 January 2014, accessed 12 August 2015, http://news.yahoo.com/google-develops-contact-lens-glucose-monitor-000147894.html; Mark Scott, ‘Novartis Joins with Google to Develop Contact Lens That Monitors Blood Sugar’, New York Times, 15 July 2014, accessed 12 August 2015, http://www.nytimes.com/2014/07/16/business/international/novartisjoins-with-google-to-develop-contact-lens-to-monitor-blood-sugar.html?_r=0; Rachel Barclay, ‘Google Scientists Create Contact Lens to Measure Blood Sugar Level in Tears’, Healthline, 23 January 2014, accessed 12 August 2015, http://www.healthline.com/health-news/diabetes-googledevelops-glucose-monitoring-contact-lens-012314.
25. Quantified Self, http://quantifiedself.com/; Dormehl, The Formula, 11–16.
26. Dormehl, The Formula, 91–5; Bedpost, http://bedposted.com.
27. Dormehl, The Formula, 53–9.
28. Angelina Jolie, ‘My Medical Choice’, New York Times, 14 May 2013, accessed 22 December 2014, http://www.nytimes.com/2013/05/14/opinion/my-medical-choice.html.
29. ‘Google Flu Trends’, http://www.google.org/flutrends/about/how.html; Jeremy Ginsberg et al., ‘Detecting Influenza Epidemics Using Search Engine Query Data’, Nature 457:7232 (2008), 1012–14; Declan Butler, ‘When Google Got Flu Wrong’, Nature, 13 February 2013, accessed 22 December 2014, http://www.nature.com/news/when-google-got-flu-wrong-1.12413; Miguel Helft, ‘Google Uses Searches to Track Flu’s Spread’, New York Times, 11 November 2008, accessed 22 December 2014, http://msl1.mit.edu/furdlog/docs/nytimes/2008-11-11_nytimes_google_influenza.pdf; Samantha Cook et al., ‘Assessing Google Flu Trends Performance in the United States during the 2009 Influenza Virus A (H1N1) Pandemic’, PLOS ONE, 19 August 2011, accessed 22 December 2014, http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0023610; Jeffrey Shaman et al., ‘Real-Time Influenza Forecasts during the 2012–2013 Season’, Nature, 23 April 2013, accessed 24 December 2014, http://www.nature.com/ncomms/2013/131203/ncomms3837/full/ncomms3837.html.
30. Alistair Barr, ‘Google’s New Moonshot Project: The Human Body’, Wall Street Journal, 24 July 2014, accessed 22 December 2014, http://www.wsj.com/articles/google-to-collect-data-to-define-healthyhuman-1406246214; Nick Summers, ‘Google Announces Google Fit Platform Preview for Developers’, Next Web, 25 June 2014, accessed 22 December 2014, http://thenextweb.com/insider/2014/06/25/google-launches-google-fit-platform-preview-developers/.
31. Dormehl, The Formula, 72–80.
32. Wu Youyou, Michal Kosinski and David Stillwell, ‘Computer-Based Personality Judgements Are More Accurate Than Those Made by Humans’, PNAS 112:4 (2015), 1036–40.
33. For oracles, agents and sovereigns see: Bostrom, Superintelligence.
34. https://www.waze.com/.
35. Dormehl, The Formula, 206.
36. World Bank, World Development Indicators 2012 (Washington DC: World Bank, 2012), 72, http://data.worldbank.org/sites/default/files/wdi-2012-ebook.pdf.
37. Larry Elliott, ‘Richest 62 People as Wealthy as Half of World’s Population, Says Oxfam’, Guardian, 18 January 2016, retrieved 9 February 2016, http://www.theguardian.com/business/2016/jan/18/richest-62-billionaireswealthy-half-world-population-combined; Tami Luhby, ‘The 62 Richest People Have as Much Wealth as Half the World’, CNN Money, 18 January 2016, retrieved 9 February 2016, http://money.cnn.com/2016/01/17/news/economy/oxfam-wealth/.
10 The Ocean of Consciousness
1. Joseph Henrich, Steven J. Heine and Ara Norenzayan, ‘The Weirdest People in the World’, Behavioral and Brain Sciences 33 (2010), 61–135.
2. Benny Shanon, Antipodes of the Mind: Charting the Phenomenology of the Ayahuasca Experience (Oxford: Oxford University Press, 2002).
3. Thomas Nagel, ‘What Is It Like to Be a Bat?’, Philosophical Review 83:4 (1974), 435–50.
4. Michael J. Noad et al., ‘Cultural Revolution in Whale Songs’, Nature 408:6812 (2000), 537; Nina Eriksen et al., ‘Cultural Change in the Songs of Humpback Whales (Megaptera novaeangliae) from Tonga’, Behavior 142:3 (2005), 305–28; E. C. M. Parsons, A. J. Wright and M. A. Gore, ‘The Nature of Humpback Whale (Megaptera novaeangliae) Song’, Journal of Marine Animals and Their Ecology 1:1 (2008), 22–31.
5. C. Bushdid et al., ‘Humans Can Discriminate More Than 1 Trillion Olfactory Stimuli’, Science 343:6177 (2014), 1370–2; Peter A. Brennan and Frank Zufall, ‘Pheromonal Communication in Vertebrates’, Nature 444:7117 (2006), 308–15; Jianzhi Zhang and David M. Webb, ‘Evolutionary Deterioration of the Vomeronasal Pheromone Transduction Pathway in Catarrhine Primates’, Proceedings of the National Academy of Sciences 100:14 (2003), 8337–41; Bettina Beer, ‘Smell, Person, Space and Memory’, in Experiencing New Worlds, ed. Jurg Wassmann and Katharina Stockhaus (New York: Berghahn Books, 2007), 187–200; Niclas Burenhult and Asifa Majid, ‘Olfaction in Aslian Ideology and Language’, Sense and Society 6:1 (2011), 19–29; Constance Classen, David Howes and Anthony Synnott, Aroma: The Cultural History of Smell (London: Routledge, 1994); Amy Pei-jung Lee, ‘Reduplication and Odor in Four Formosan Languages’, Language and Linguistics 11:1 (2010), 99–126; Walter E. A. van Beek, ‘The Dirty Smith: Smell as a Social Frontier among the Kapsiki/Higi of North Cameroon and North-Eastern Nigeria’, Africa 62:1 (1992), 38–58; Ewelina Wnuk and Asifa Majid, ‘Revisiting the Limits of Language: The Odor Lexicon of Maniq’, Cognition 131 (2014), 125–38. Yet some scholars connect the decline of human olfactory powers to much more ancient evolutionary processes. See: Yoav Gilad et al., ‘Human Specific Loss of Olfactory Receptor Genes’, Proceedings of the National Academy of Sciences 100:6 (2003), 3324–7; Atsushi Matsui, Yasuhiro Go and Yoshihito Niimura, ‘Degeneration of Olfactory Receptor Gene Repertories in Primates: No Direct Link to Full Trichromatic Vision’, Molecular Biology and Evolution 27:5 (2010), 1192–200; Graham M. Hughes, Emma C. Teeling and Desmond G. Higgins, ‘Loss of Olfactory Receptor Function in Hominid Evolution’, PLOS One 9:1 (2014), e84714.
6. Matthew Crawford, The World Beyond Your Head: How to Flourish in an Age of Distraction (London: Viking, 2015).
7. Turnbull and Solms, The Brain and the Inner World, 136–59; Kelly Bulkeley, Visions of the Night: Dreams, Religion and Psychology (New York: State University of New York Press, 1999); Andreas Mavromatis, Hypnagogia: The Unique State of Consciousness Between Wakefulness and Sleep (London: Routledge, 1987); Brigitte Holzinger, Stephen LaBerge and Lynn Levitan, ‘Psychophysiological Correlates of Lucid Dreaming’, American Psychological Association 16:2 (2006), 88–95; Watanabe Tsuneo, ‘Lucid Dreaming: Its Experimental Proof and Psychological Conditions’, Journal of International Society of Life Information Science 21:1 (2003), 159–62; Victor I. Spoormaker and Jan van den Bout, ‘Lucid Dreaming Treatment for Nightmares: A Pilot Study’, Psychotherapy and Psychosomatics 75:6 (2006), 389–94.
11 The Data Religion
1. See, for example, Kevin Kelly, What Technology Wants (New York: Viking Press, 2010); César Hidalgo, Why Information Grows: The Evolution of Order, from Atoms to Economies (New York: Basic Books, 2015); Howard Bloom, Global Brain: The Evolution of Mass Mind from the Big Bang to the 21st Century (Hoboken: Wiley, 2001); DuBravac, Digital Destiny.
2. Friedrich Hayek, ‘The Use of Knowledge in Society’, American Economic Review 35:4 (1945), 519–30.
3. Kiyohiko G. Nishimura, Imperfect Competition Differential Information and the Macro-foundations of Macro-economy (Oxford: Oxford University Press, 1992); Frank M. Machovec, Perfect Competition and the Transformation of Economics (London: Routledge, 2002); Frank V. Mastrianna, Basic Economics, 16th edn (Mason: South-Western, 2010), 78–89; Zhiwu Chen, ‘Freedom of Information and the Economic Future of Hong Kong’, HKCER Letters 74 (2003), http://www.hkrec.hku.hk/Letters/v74/zchen.htm; Randall Morck, Bernard Yeung and Wayne Yu, ‘The Information Content of Stock Markets: Why Do Emerging Markets Have Synchronous Stock Price Movements?’, Journal of Financial Economics 58:1 (2000), 215–60; Louis H. Ederington and Jae Ha Lee, ‘How Markets Process Information: News Releases and Volatility’, Journal of Finance 48:4 (1993), 1161–91; Mark L. Mitchell and J. Harold Mulherin, ‘The Impact of Public Information on the Stock Market’, Journal of Finance 49:3 (1994), 923–50; Jean-Jacques Laffont and Eric S. Maskin, ‘The Efficient Market Hypothesis and Insider Trading on the Stock Market’, Journal of Political Economy 98:1 (1990), 70–93; Steven R. Salbu, ‘Differentiated Perspectives on Insider Trading: The Effect of Paradigm Selection on Policy’, St John’s Law Review 66:2 (1992), 373–405.
4. Valery N. Soyfer, ‘New Light on the Lysenko Era’, Nature 339:6224 (1989), 415–20; Nils Roll-Hansen, ‘Wishful Science: The Persistence of T. D. Lysenko’s Agrobiology in the Politics of Science’, Osiris 23:1 (2008), 166–88.
5. William H. McNeill and J. R. McNeill, The Human Web: A Bird’s-Eye View of World History (New York: W. W. Norton, 2003).
6. Aaron Swartz, ‘Guerilla Open Access Manifesto’, July 2008, accessed 22 December 2014, https://ia700808.us.archive.org/17/items/GuerillaOpenAccessManifesto/Goamjuly2008.pdf; Sam Gustin, ‘Aaron Swartz, Tech Prodigy and Internet Activist, Is Dead at 26’, Time, 13 January 2013, accessed 22 December 2014, http://business.time.com/2013/01/13/tech-prodigy-and-internet-activistaaron-swartz-commits-suicide; Todd Leopold, ‘How Aaron Swartz Helped Build the Internet’, CNN, 15 January 2013, accessed 22 December 2014, http://edition.cnn.com/2013/01/15/tech/web/aaron-swartzinternet/; Declan McCullagh, ‘Swartz Didn’t Face Prison until Feds Took Over Case, Report Says’, CNET, 25 January 2013, accessed 22 December 2014, http://news.cnet.com/8301-13578_3-57565927-38/swartzdidnt-face-prison-until-feds-took-over-case-report-says/.
7. John Sousanis, ‘World Vehicle Population Tops 1 Billion Units’, Wardsauto, 15 August 2011, accessed 3 December 2015, http://wardsauto.com/news-analysis/world-vehicle-population-tops-1-billion-units.
8. ‘No More Woof’, https://www.indiegogo.com/projects/no-more-woof.
I would like to express my gratitude to the following humans, animals and institutions:
To my teacher, Satya Narayan Goenka (1924–2013), who taught me the technique of Vipassana meditation, which has helped me to observe reality as it is, and to know the mind and the world better. I could not have written this book without the focus, peace and insight gained from practising Vipassana for the last fifteen years.
To the Israel Science Foundation, which helped fund this research project (grant number 26/09).
To the Hebrew University, and in particular to its department of history, my academic home; and to all my students over the years, who taught me so much through their questions, their answers and their silences.
To my research assistant, Idan Sherer, who devotedly handled whatever I threw his way, be it chimpanzees, Neanderthals or cyborgs. And to my other assistants, Ram Liran, Eyal Miller and Omri Shefer Raviv, who pitched in from time to time.
To Michal Shavit, my publisher at Penguin Random House in the UK, for taking a gamble, and for her unfailing commitment and support over many years; and to Ellie Steel, Suzanne Dean, Bethan Jones, Maria Garbutt-Lucero and their colleagues at Penguin Random House, for all their help.
To David Milner, who did a superb job editing the manuscript, saved me from many an embarrassing mistake, and reminded me that ‘delete’ is probably the most important key on the keyboard.
To Preena Gadher and Lija Kresowaty of Riot Communications, for helping to spread the word so efficiently.
To Jonathan Jao, my publisher at HarperCollins in New York, and to Claire Wachtel, my former publisher there, for their faith, encouragement and insight.
To Shmuel Rosner and Eran Zmora, for seeing the potential, and for their valuable feedback and advice.
To Deborah Harris, for helping with the vital breakthrough.
To Amos Avisar, Shilo de Ber, Tirza Eisenberg, Luke Matthews, Rami Rotholz and Oren Shriki, who read the manuscript carefully, and devoted much time and effort to correcting my mistakes and enabling me to see things from other perspectives.
To Yigal Borochovsky, who convinced me to go easy on God.
To Yoram Yovell, for his insights and for our walks together in the Eshta’ol forest.
To Ori Katz and Jay Pomeranz, who helped me get a better understanding of the capitalist system.
To Carmel Weismann, Joaquín Keller and Antoine Mazieres, for their thoughts about brains and minds.
To Benjamin Z. Kedar, for planting and watering the seeds.
To Diego Olstein, for many years of warm friendship and calm guidance.
To Ehud Amir, Shuki Bruck, Miri Worzel, Guy Zaslavaki, Michal Cohen, Yossi Maurey, Amir Sumakai-Fink, Sarai Aharoni and Adi Ezra, who read selected parts of the manuscript and shared their ideas.
To Eilona Ariel, for being a gushing fountain of enthusiasm and a firm rock of refuge.
To my mother-in-law and accountant, Hannah Yahav, for keeping all the money balls in the air.
To my grandmother Fanny, my mother, Pnina, my sisters Liat and Einat, and to all my other family members and friends for their support and companionship.
To Chamba, Pengo and Chili, who offered a canine perspective on some of the main ideas and theories of this book.
And to my spouse and manager, Itzik, who already today functions as my Internet-of-All-Things.
The pagination of this electronic edition does not match the edition from which it was created. To locate a specific entry, please use your e-book reader’s search tools.
Entries in italics indicate photographs and illustrations.
Abdallah, Muhammad Ahmad bin (Mahdi) 272, 273
Abe, Shinzō 208
abortion 190, 191, 238
Adee, Sally 369
ADHD (attention deficit hyperactivity disorder) 40
aesthetics: humanist 236, 240; Middle Ages 230, 230–1
Afghanistan 19, 40, 101, 171, 356
Africa: AIDS crisis in 14; borders in 261, 355; Ebola outbreak in 11, 13, 204; Sapiens evolution in savannah of 342–3, 393–4
Agricultural Revolution: animal welfare and 77, 83, 368; Bible and 77, 156, 157; slavery and 96–7
AIDS 14, 19
algorithms: concept defined 372, 373, 118, 119, 122, 125, 126, 141, 372, 373, 178
Allen, Woody 29
AlphaGo 325
Alzheimer’s disease 24, 341
Amazon Corporation 348–9
Amenemhat III 161, 162, 175
Andersson, Professor Leif 233
animals: Agricultural Revolution and 77, 83, 368; as algorithms 72; humanism and 99, 233; inequality, reaction to 143; intersubjective web of meaning and 151; mass extinction of 233, 288
animist cultures 91, 92, 96, 97, 173
Annie (musical composition program) 329–30
Anthropocene 71–100
antibiotics 10, 12, 13, 23, 27, 99, 180, 268, 277, 353
antidepressants 40, 49, 123–5
Apple Corporation 16, 155, 377
art: medieval and humanist attitudes towards 230–2, 235; technology and 328–30
artificial intelligence 49; animal welfare and 120; humanism, threat to 50, 51; renders humans economically and militarily useless 50 see also algorithms; Dataism and under individual area of AI
artificial pancreas 335
Ashurbanipal of Assyria, King 68, 68
Associated Press 316
Auschwitz 259, 381
autonomous cars 115, 115, 163, 326, 390
Aztec Empire 8–9
Babylon 173, 311–2, 395–6
Bach, Johann Sebastian 363
Bariyapur, Nepal 92–3
bats: experience of the world 361–2, 363–4; lending and vampire 205–6
Beane, Billy 325–6
Bedpost 336
Beethoven, Ludwig van 255, 329; Fifth Symphony and value of experience 363, 393
Belavezha Accords, 1991 146, 146
Bentham, Jeremy 30, 32, 35
Berlin Conference, 1884 168
Berlin Wall, fall of 1989 134
Berry, Chuck 393; ‘Johnny B. Goode’ 363
Bible 46; animal kingdom and 386; evolution and 103; homosexuality and 196, 174; Old Testament 48, 76; power of shaping story 173, 174; source of authority 174; unique nature of humanity, promotes 76–8
biological poverty line 3–6
biotechnology 14, 44, 98, 178, 271, 275, 380, 401 see also individual biotech area
Bismarck, Otto von 31, 273
Black Death 6, 11, 12
Borges, Jorge Luis: ‘A Problem’ 301–2
Bostrom, Nick 331–2
Bowden, Mark: Black Hawk Down 257
bowhead whale song, spectrogram of 363, 363
brain: Agricultural Revolution and 160; artificial intelligence and 280, 280; biological engineering and 44; brain–computer interfaces 49, 55, 357, 364; consciousness and 117, 126; cyborg engineering and 45; Dataism and 373, 398, 400; free will and 37, 38, 41; self and 132, 133; transcranial stimulators and manipulation of 289–92; two hemispheres 293–6
brands 162
Brexit 380
Brezhnev, Leonid 275
Brin, Sergey 28, 341
Buddhism 182, 186, 188, 223, 361
Calico 24, 28
Cambodia 266
Cambridge Declaration on Consciousness, 2012 123
capitalism 28, 184, 207, 261, 313, 402 see also economics/economy
Caporetto, Battle of, 1917 303
Catholic Church 148, 184; Donation of Constantine 194; economic and technological innovations and 276; marriage and 26; papal infallibility principle 148, 191, 199; Thirty Years War and 244, 245, 248; turns from creative into reactive force 276–7 see also Bible and Christianity
Ceaușescu, Nicolae 138
Charlie Hebdo 228
Château de Chambord, Loire Valley, France 62, 62
Chekhov Law 55
child mortality 10, 34, 175
childbirth, narration of 299, 299–300
China 1, 271; biotech and 341; Civil War 265; economic growth and 207, 208, 211; famine in 5, 5, 381; Great Wall of 49, 179; liberalism, challenge to 96; Taiping Rebellion, 1850–64 273; Three Gorges Dam, building of 163, 189, 197
Chinese river dolphin 189, 197, 401
Christianity: abortion and 190; animal welfare and 206; homosexuality and 22 see also Bible and Catholic Church
Chukwu 48
CIA 58, 160, 295–6
Clever Hans (horse) 129–31, 130
climate change 20, 152, 214, 382, 402
Clinton, Bill 58
Clovis, King of France 229, 229
Cognitive Revolution 156, 357, 384
Cold War 18, 34, 150, 207, 268, 377
cold water experiment (Kahneman) 343
colonoscopy study (Kahneman and Redelmeier) 297–9
Columbus, Christopher 198, 364, 385
Communism 5, 56, 58, 150, 182; cooperation and 139; Dataism and 374, 399, 401; economic growth and 207, 208, 209, 218, 220; liberalism, challenge to 182, 183, 184; Second World War and 265
computers: algorithms and see algorithms; brain–computer interfaces 49, 55, 289, 357, 364; consciousness and 107, 115, 120, 121; Dataism and 373, 380, 394–5
Confucius 46, 269, 397; Analects 271, 272
Congo 9, 10, 19, 168, 207, 393
consciousness: animal 120, 121, 357, 402; manufacturing new states of 399; positive psychology and 365; Problem of Other Minds 365; subjective experience and 357, 358–64
cooperation: intersubjective meaning and 144–52, 155–78; power of human 132–52, 155–78; revolution and 133–8; size of group and 138–44
Cope, David 328–30
creativity 29, 277, 320, 330, 331, 351
credit 203–6
Crusades 150, 191, 230, 242, 307
Csikszentmihalyi, Mihaly 365
customer-services departments 321–2
cyber warfare 17, 59, 311–2
Cyborg 2 (movie) 339
cyborg engineering 67, 312, 339
Cyrus, King of Persia 173
Daoism 182, 223
Darom, Naomi 233
Darwin, Charles: evolutionary theory of 254, 272, 377, 397; On the Origin of Species 272, 307, 372
data processing: Agricultural Revolution and 276; centralised and distributed (communism and capitalism) 114, 118; democracy, challenge to 394; life as 114, 118, 373, 402; stock exchange and 368; value of human experience and 393–5; writing and 157–60 see also algorithms and Dataism
Dataism 371, 373; birth of 373; criticism of 393; interpretation of history and 379, 390; religion of 386–90; value of experience and 393–5
Dawkins, Richard 307
de Grey, Aubrey 24, 25, 27
Deadline Corporation 335–6
death 21–9 see also immortality
Declaration of the Rights of Man and of the Citizen, The 310–1
Deep Blue 324, 324
Deep Knowledge Ventures 327
DeepMind 324–5
Dehaene, Stanislas 117
democracy: Dataism and 382, 385, 396, 397, 401; evolutionary humanism and 270; technological challenge to 308, 343–6
Dennett, Daniel 117
depression 36, 39, 40, 49, 54, 67, 289, 362, 369
Descartes, René 108
diabetes 15, 27–8
Diagnostic and Statistical Manual of Mental Disorders (DSM) 365
Dix, Otto 255; The War (Der Krieg) (1929–32) 246, 247, 248
DNA: in vitro fertilisation and 144, 341, 342, 352, 398; soul and 106
doctors, replacement by artificial intelligence of 317–21
Donation of Constantine 194
drones 290, 295, 311, 312, 312, 313
drugs: computer-assisted methods for research into 327; Ebola and 204; pharmacy automation and 320; psychiatric 49, 125
Dua-Khety 175–6
dualism 188
Duchamp, Marcel: Fountain 235, 235
Ebola 2, 11, 13, 204
economics/economy: benefits of growth in 383, 391, 394, 399, 401, 402; happiness and 30, 32, 33, 39; humanism and 236, 254, 271, 275, 313; immortality and 28; paradox of historical knowledge and 311, 313, 316, 331, 352, 354
education 233, 235, 236, 240, 249, 317, 354
Eguía, Francisco de 8
Egypt 1, 3, 68, 99, 142, 143, 171, 207; Lake Fayum engineering project 175, 179; life of peasant in ancient 175, 176; Revolution, 2011 138, 252; slavery in 96; Sudan and 272
Egyptian Journalists Syndicate 228
Einstein, Albert 103, 255
electromagnetic spectrum 358, 359
Eliot, Charles W. 311
EMI (Experiments in Musical Intelligence) 328–9
Engels, Friedrich 273
Enki 93, 157, 328
Epicenter, Stockholm 45
Epicurus 30, 33, 35, 41
epilepsy 293–4
Erdoğan, Recep Tayyip 208
eugenics 53, 55
European Union 82, 151, 160, 252, 380
evolution 43, 76, 78, 89, 111, 132, 141, 151, 204, 262, 284, 285, 299, 307, 314, 363, 365, 393, 397
evolutionary humanism 357
Facebook 46, 138, 346, 366, 392, 393, 397, 398
famine 19, 27, 33, 41, 56, 59, 166, 167, 180, 206, 210, 220, 355
famine, plague and war, end of 1–21
First World War, 1914–18 14, 16, 52, 246, 247, 248, 256, 311, 312
‘Flash Crash,’ 2010 316
fMRI scans 109, 119, 144, 160, 284, 317, 336, 339, 360
FOMO (Fear of Missing Out) 366
Foucault, Michel: The History of Sexuality 277–8
France: famine in, 1692–4 5; First World War and 9, 14, 16; founding myth of 229, 229; French Revolution 30, 31, 155, 310; Second World War and 164, 264–5
France, Anatole 53
Frederick the Great, King 142–3
free will 225, 232, 249, 306, 307, 308, 343
freedom of expression 209, 388
freedom of information 388, 389–90
Freudian psychology 88, 118, 225–6
Furuvik Zoo, Sweden 126–7
Future of Employment, The (Frey/Osborne) 330
Gandhi, Indira 266, 267
Gazzaniga, Professor Michael S. 297
GDH (gross domestic happiness) 32
GDP (gross domestic product) 30, 32, 34, 208, 264
genetic engineering 25, 41, 44, 48, 49, 50, 233, 276, 278, 288, 357, 364, 374
Germany 36; First World War and 14, 16, 246, 247, 248; migration crisis and 252; Second World War and 31
Gilgamesh epic 93
Gillies, Harold 52
global warming 20, 214, 382, 402
God: Agricultural Revolution and 77, 78, 90, 95, 386, 388, 391, 395, 398; death/immortality and 48; death of 68, 98, 222, 236, 263, 270; defining religion and 182, 183, 184, 185; evolutionary theory and 103; hides in small print of factual statements 191, 196; homosexuality and 196, 228, 278; humanism and 222, 223, 225, 226, 227, 228, 231, 243, 246, 250, 263, 270, 272, 273, 274, 276, 278, 307, 395, 396; intersubjective reality and 146, 180, 182, 183, 184, 185, 191, 196; Middle Ages, as source of meaning and authority in 224, 226, 307; Newton myth and 222, 228, 270, 356; Scientific Revolution and 116; war narratives and 243, 246
gods: Agricultural Revolution and theist 30, 98, 181, 182; humanism and 98; humans as (upgrade to Homo Deus) 21, 25, 50, 56, 98; intersubjective reality and 151, 155, 177, 327, 357; modern covenant and 97, 98; spirituality and 1, 2, 4, 7, 8, 19
Google 24, 28, 115, 115, 150, 157, 163, 277, 315, 324, 325, 326, 335, 346, 389, 397, 398; Google Baseline Study 340; Google Now 348; Google Ventures 24
Gorbachev, Mikhail 377
Götze, Mario 36, 63
Greece 30, 133, 173, 174, 242, 267, 270, 307
greenhouse gas emissions 216–7
Gregory the Great, Pope 230, 230
guilds 232
hackers 312, 316, 349, 398
Hadassah Hospital, Jerusalem 289
Hamlet (Shakespeare) 46, 200–1
HaNasi, Rabbi Yehuda 94
happiness 30–43
Haraway, Donna: ‘A Cyborg Manifesto’ 277–8
Harlow, Harry 89, 90
Harris, Sam 197
Hassabis, Dr Demis 324
Hattin, Battle of, 1187 147, 148
Hayek, Friedrich 374
healthcare 320, 353–4
Heine, Steven J. 359–60
helmets: attention 369; ‘mind-reading’ 45
Henrich, Joseph 43
Herodotus 173, 174
Hinduism 90, 182, 185, 188, 198, 207, 263, 270, 271, 272, 352, 386
Hitler, Adolf 182, 183, 357, 380
Holocaust 165, 259
Holocene 72
Holy Spirit 229, 229, 230, 230
Homo deus: Homo sapiens upgrade to 43–9, 356–71; techno-humanism and 356–71
Homo sapiens: conquer the world 69, 21, 43–9; immortality 21–9; loses control 281–402; problems with predicting history of 56–65
homosexuality 121, 196, 238, 277–8
Hong Xiuquan 273
Human Effectiveness Directorate, Ohio 290
humanism 199, 221; aesthetics and 221, 230, 234, 234, 235, 235, 244; education system and 233, 235, 235, 236; ethics 99; nationalism and 234, 234, 250–2; revolution, humanist 222–79; schism within 248–59; Scientific Revolution gives birth to 97–100; socialist see socialist humanism/socialism; techno-humanism 356–71; value of experience and 259–63; war narratives and 243–8, 244, 247, 255–8; wars of religion, 1914–1991 263–9
hunter-gatherers 34, 60, 90, 95, 96, 97, 98, 141, 142, 156, 163, 169, 175, 176, 270, 326, 360, 365, 366, 383
Hussein, Saddam 19, 311–2
IBM 318, 324, 335
Iliescu, Ion 137, 138
‘imagined orders’ 143–50 see also intersubjective meaning
immigration 250–2
immortality 30, 47, 51, 52, 56, 65, 66, 67, 139, 180, 270, 278, 355, 400
in vitro fertilisation x, 53–4
Inanna 157, 328
India: drought and famine in 3; economic growth in modern 266, 267; Hindu revival, 19th-century 272, 273, 275; hunter-gatherers in 96; liberalism and 266, 267; population growth rate 9
individualism: evolutionary theory and 104–5; liberal idea of undermined by twenty-first-century technology 332–50; self and 296–306, 303, 305
Industrial Revolution 57, 64, 276, 315, 322, 330, 379, 401
inequality 56, 264, 382, 402
intelligence: animal 81, 82, 100, 138; decoupling from consciousness 357, 402; definition of 100, 138; upgrading human 352, 357 see also techno-humanism; value of consciousness and 402
intelligent design 73, 103
internet: distribution of power 379, 389; Internet-of-All-Things 386, 387, 394, 395, 398, 400; rapid rise of 50–1
intersubjective meaning 180, 327, 357
Iraq 3, 17, 40, 277
Islam 19, 138, 189, 197, 206, 207, 208, 223, 228, 250, 263, 270, 271, 272, 273, 276, 277, 278, 356, 397; fundamentalist 19, 197, 228, 270, 271, 272, 277, 356 see also Muslims
Islamic State (IS) 277, 356
Isonzo battles, First World War 302–4, 303
Israel 48, 96, 251
Italy 264, 302–4, 303
Jainism 94–5
Jamestown, Virginia 300
Japan 30, 31, 33, 208, 248, 354, 355
Jefferson, Thomas 32, 193, 251, 284, 307
Jeopardy! (game show) 318, 318
Jesus Christ 155, 184, 188, 273, 276, 299
Jews/Judaism: ancient/biblical 94, 173, 174, 182, 194, 270, 396; animal welfare and 94; expulsions from early modern Europe 198, 199; Great Jewish Revolt (AD 70) 183
Jolie, Angelina 340, 352
Jones, Lieutenant Henry 256
Journal of Personality and Social Psychology 359–60
Joyce, James: Ulysses 242
JSTOR digital library 388–9
Jung, Carl 225–6
Kahneman, Daniel 296, 343
Kasparov, Garry 324, 324
Khmer Rouge 266
Khrushchev, Nikita 265, 275–6
Kurzweil, Ray 24, 25, 27; The Singularity Is Near 386
Kyoto protocol, 1997 216–7
Lake Fayum engineering project, Egypt 175, 179
Larson, Professor Steve 329
Law of the Jungle 14–21
lawns 59–65, 62, 63
lawyers, replacement by artificial intelligence of 316–7
Lea, Tom: That 2,000 Yard Stare (1944) 246, 247, 248
Lee Sedol 325
Lenin, Vladimir 208, 253, 275, 380
Lenin Academy for Agricultural Sciences 376–7
Levy, Professor Frank 326
liberal humanism/liberalism 182, 249; contemporary alternatives to 313; free will and 306; humanism and see humanism; humanist wars of religion, 1914–1991 and 307; meaning of life and 306, 307; schism within humanism and 262, 393; victory of 267–8
life expectancy 5, 51
‘logic bombs’ (malicious software codes) 17
Louis XIV, King 4, 64, 229
lucid dreaming 366–7
Luther, Martin 277, 278
Luther King, Martin 265–6
Lysenko, Trofim 376–7
MAD (mutual assured destruction) 267
malaria 12, 19, 318
malnutrition 3, 5, 6, 10, 27, 56
Mao Zedong 27, 165, 167, 253, 261, 265, 380
Maris, Bill 24
marriage: artificial intelligence and 348; gay 277, 278; humanism and 277, 278, 293, 343, 369; life expectancy and 26
Marx, Karl/Marxism 57–8, 60, 184, 208, 249–50, 273–5; Communist Manifesto 218; Das Kapital 57, 276
Mattersight Corporation 321–2
Mazzini, Giuseppe 251–2
meaning of life 185, 224, 225, 343, 391–2
Memphis, Egypt 158–9
Mendes, Aristides de Sousa 164, 164–5
mental spectrum 365
Merkel, Angela 250–1
Mesopotamia 93
Mexico 11, 265
Michelangelo 27, 255; David 262
Microsoft 16, 157, 335; Band 335; Cortana 346–7
Mill, John Stuart 35
‘mind-reading’ helmet 45
Mindojo 317
MIT 326, 389
modern covenant 222
Modi, Narendra 207, 208
money: credit and 145, 146, 171, 177; invention of 157, 158, 357, 384–5; investment in growth 210–2
mother–infant bond 88–90
Mubarak, Hosni 138
Muhammad 189, 228, 272, 397
Murnane, Professor Richard 326
Museum of Islamic Art, Qatar 64
Muslims: Charlie Hebdo attack and 228; Crusades and 147, 148, 149, 150; economic growth, belief in 207; evaluating success of 175; evolution and 104; expulsions of from early modern Europe 198, 199; free will and 287; lawns and 64–5; LGBT community and 227–8 see also Islam
Mussolini, Benito 304
Myanmar 145, 207
Nagel, Thomas 362
nanotechnology 23, 25, 51, 98, 271, 349, 357
National Health Service, UK 339–40
National Salvation Front, Romania 137
NATO 266–7
Naveh, Danny 76, 96
Nayaka people 96
Nazism 182, 183, 249, 380, 382, 401
Ne Win, General 145
Neanderthals 50, 156, 263, 275, 361, 384
Nebuchadnezzar, King of Babylonia 173, 311–2
Nelson, Shawn 257
New York Times 311, 352, 375
New Zealand: Animal Welfare Amendment Act, 2015 123
Newton, Isaac 27, 144, 198
Nietzsche, Friedrich 236, 256, 270
non-organic beings 43, 44, 45–6
Norenzayan, Ara 359–60
Novartis 335
nuclear weapons 15, 17, 17, 132, 150, 163, 217, 267, 377
Nyerere, Julius 166
Oakland Athletics 325–6
Obama, President Barack 316, 381
obesity 18, 54
OncoFinder 327
Ottoman Empire 198, 208
‘Our Boys Didn’t Die in Vain’ syndrome 303
Page, Larry 28
paradox of knowledge 56–9
Paris Agreement, 2015 217
Pathway Pharmaceuticals 327
Petsuchos 161–2
Pfungst, Oskar 130
pharmacists 320
pigs, domesticated 90, 99, 101, 102, 233
Pinker, Steven 307
Pius IX, Pope 272–3
Pixie Scientific 335
plague/infectious disease 1–2, 6–14
political famines 4
politics: automation of 41; liberalism and 231, 234, 234, 236, 249N, 254; life expectancy and 27, 29; revolution and 133–8; speed of change in 58–9
pollution 20, 176, 346
poverty 19, 33, 56, 252, 253, 264, 354–5
Presley, Elvis 159, 159–60
Problem of Other Minds 120–1, 127–8
Protestant Reformation 199, 244–6, 245
psychology: evolutionary 118; humanism and 225–6, 253–4; positive 365–7
Putin, Vladimir 27, 350, 381
pygmy chimpanzees (bonobos) 139–40
Quantified Self movement 336
quantum physics 104, 171, 183, 236
Qur’an 170, 175, 271, 272
rats, laboratory 38, 39, 102, 121–5, 124, 128–9, 288–9
Redelmeier, Donald 297–9
relativity, theory of 103, 104, 171
religion: animals and 173; animist 91, 92, 96, 97, 173; challenge to liberalism 270; Dataism 68, 236, 263, 270; humanist ethic and 173; science, relationship with 99, 276; twenty-first century 178
revolutions 60, 155, 310, 312–3
Ritalin 39, 369
robo-rat 288–9
Roman Empire 99, 192, 193, 195, 242, 378
Romanian Revolution, 1989 139
Romeo and Juliet (Shakespeare) 225, 284, 307
Russian Revolution, 1917 137
Rwanda 15–6
Saarinen, Sharon 54
Saladin 147, 148, 149, 151–2
Sanders, Bernie 380
Santino (chimpanzee) 126–8
Saraswati, Dayananda 272, 273, 275
Scientific Revolution 213, 385
Scotland 4, 305, 305
Second World War, 1939–45 21, 34, 56, 116, 164, 255, 294
self: animal self-consciousness 348; free will and 225, 232, 249, 306, 307, 308, 343; life sciences undermine liberal idea of 173, 174; single authentic self, humanist idea of 253, 396; socialism and self-reflection 287; techno-humanism and 291
Seligman, Martin 365
Senusret III 161, 162
September 11 attacks, New York, 2001 19, 379
Shavan, Shlomi 336
Shedet, Egypt 161–2
Silicon Valley 24, 25, 269, 276, 356, 386
Sima Qian 173–4
Singapore 32, 208
slavery 96–7
smallpox 8–9, 10–1
Snayers, Pieter: Battle of White Mountain 248
Sobek 163, 171, 179–80
socialist humanism/socialism 258, 265, 266, 267, 268, 330, 356, 382
soul 29, 92, 129, 131, 133, 139, 147, 148, 149, 151, 160, 187, 190, 196, 231, 274, 284, 285, 287, 293, 328, 329, 386
South Korea 33, 152, 266, 268, 296, 325, 354, 401
Soviet Union: communism and 136, 137, 146, 146, 207, 209, 268, 375, 375; economy and 207, 209, 265, 375, 375
Spanish Flu 11
Sperry, Professor Roger Wolcott 294
St Augustine 277, 278
Stalin, Joseph 27, 258, 350, 397
stock exchange 204, 211, 296, 316, 372–3, 374–5, 376
Stone Age 34, 60, 74, 80, 132, 155, 156, 157, 163, 176, 263
sub-normative mental spectrum 364, 365
subjective experience 80, 155, 180, 231, 239, 394, 399
Sudan 272, 273, 275
suicide rates 2, 15, 33
Sumerians 159, 328
Survivor (TV reality show) 242
Swartz, Aaron 388
Sylvester I, Pope 192
Syria 3, 19, 150, 171, 222, 277, 316
Taiping Rebellion, 1850–64 273
Talwar, Professor Sanjiv 288–9
techno-humanism: definition of 357; focus of psychological research and 358–64; human will and 368–71; upgrading of mind 364–71
technology: Dataism and see Dataism; inequality and future 351–5; liberal idea of individual challenged by 332–50; renders humans economically and militarily useless 309–32; techno-humanism and see techno-humanism
Tekmira 204
terrorism 14, 228, 290, 292, 314
Tesla 115, 326
Thatcher, Margaret 58, 377
Thiel, Peter 24–5
Third Man, The (movie) 255–6
Thirty Years War, 1618–48 244–5
Three Gorges Dam 163, 189, 197
Thucydides 173, 174
Toyota 232, 296, 327
transcranial stimulators 45, 369
Tree of Knowledge, biblical 76–7, 77, 97–8
Trump, Donald 380
tuberculosis 9, 19, 23, 24
Turing, Alan 121, 372
Turing Machine 372
Turing Test 121
23andMe 341
Twitter 48, 138, 316, 393
Uganda 196
United States: Dataism and 379; energy usage and happiness levels in 34; evolution, suspicion of within 103; Kyoto protocol, 1997 and 249N; nuclear weapons and 163; pursuit of happiness and 174; value of life in compared to Afghan life 101; Vietnam War and 266, 267; well-being levels 34
Universal Declaration of Human Rights 21, 24
Urban II, Pope 230
Uruk 156–7
US Army 369
Valla, Lorenzo 193
Valle Giulia, Battle of 1968 265
vampire bats 205–6
Vedas 170, 182, 272
Vietnam War, 1954–75 58, 246, 266, 267
virtual-reality worlds 331
VITAL 327
Voyager golden record 260–1
Waal, Frans de 141–2
Walter, Jean-Jacques: Gustav Adolph of Sweden at the Battle of Breitenfeld (1631) 244, 244, 245, 246, 248
war 1–3, 14–9; humanism and narratives of 243–8, 244, 247, 255–8
Warsaw Pact 266–7
Watson, John 90
Watson (artificial intelligence system) 318, 335
Waze 346–7
web of meaning 144–50
WEIRD (Western, educated, industrialised, rich and democratic) countries, psychology research focus on 358, 364, 365
West Africa: Ebola and 11, 13, 204
‘What Is It Like to Be a Bat?’ (Nagel) 362
White House lawn 62, 62, 64
Wilson, Woodrow 311
Wojcicki, Anne 341
World Cup Final, 2014 36, 37, 63
World Food Conference, Rome, 1974 5
World Health Organization 10, 11, 13
writing: algorithmic organization of societies and 160–3; invention of 157–60, 384–5; shaping reality through 163–78
Yersinia pestis 7, 7
Zeus 47, 177
YUVAL NOAH HARARI has a PhD in history from the University of Oxford and now lectures at the Department of History at the Hebrew University of Jerusalem, specializing in world history. His first book, Sapiens, was translated into more than forty languages and became a bestseller in the US, the UK, France, China, Korea, and numerous other countries.
Discover great authors, exclusive offers, and more at hc.com.
Sapiens: A Brief History of Humankind
COVER DESIGN BY SUZAN NEEAN
COVER ILLUSTRATION © WWW.STUARTDALY.COM
HOMO DEUS. Copyright © 2017 by Yuval Noah Harari. All rights reserved under International and Pan-American Copyright Conventions. By payment of the required fees, you have been granted the nonexclusive, nontransferable right to access and read the text of this e-book on-screen. No part of this text may be reproduced, transmitted, decompiled, reverse-engineered, or stored in or introduced into any information storage and retrieval system, in any form or by any means, whether electronic or mechanical, now known or hereafter invented, without the express written permission of HarperCollins e-books.
Translated by the author.
First published as The History of Tomorrow in Hebrew in Israel in 2015 by Kinneret Zmora-Bitan Dvir.
Previously published in Great Britain in 2016 by Harvill Secker, a division of Penguin Random House Group Ltd.
FIRST U.S. EDITION
Print ISBN: 9780062464316
EPub Edition FEBRUARY 2017 ISBN 9780062464354
Australia
HarperCollins Publishers Australia Pty. Ltd.
Level 13, 201 Elizabeth Street
Sydney, NSW 2000, Australia
Canada
HarperCollins Canada
2 Bloor Street East - 20th Floor
Toronto, ON M4W 1A8, Canada
New Zealand
HarperCollins Publishers New Zealand
Unit D1, 63 Apollo Drive
Rosedale 0632
Auckland, New Zealand
United Kingdom
HarperCollins Publishers Ltd.
1 London Bridge Street
London SE1 9GF, UK
United States
HarperCollins Publishers Inc.
195 Broadway
New York, NY 10007
* The formula takes a multiplication symbol because the elements work one on the other. At least according to medieval scholastics, you cannot understand the Bible without logic. If your logic value is zero, then even if you read every page of the Bible, the sum of your knowledge would still be zero. Conversely, if your scripture value is zero, then no amount of logic can help you. If the formula used the addition symbol, the implication would be that somebody with lots of logic and no scriptures would still have a lot of knowledge – which you and I may find reasonable, but medieval scholastics did not.
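To make the footnote's contrast concrete, here is a minimal worked sketch in LaTeX, assuming the formula the footnote refers to is Knowledge = Scriptures × Logic; the operand names are taken from the footnote's own wording rather than from a stated equation.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Assumed medieval formula, as described in the footnote:
\[ \text{Knowledge} = \text{Scriptures} \times \text{Logic} \]
% With multiplication, a zero on either side leaves zero knowledge:
\[ \text{Scriptures} \times 0 = 0, \qquad 0 \times \text{Logic} = 0 \]
% With addition, logic alone would still yield knowledge,
% the conclusion the footnote says medieval scholastics rejected:
\[ 0 + \text{Logic} = \text{Logic} \]
\end{document}

Either way of writing it preserves the footnote's point: only the multiplicative form makes scripture and logic jointly indispensable.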
* In American politics, liberalism is often interpreted far more narrowly, and contrasted with ‘conservatism’. In the broad sense of the term, however, most American conservatives are also liberal.